Jan 21 00:56:55.996968 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 22:19:08 -00 2026 Jan 21 00:56:55.997002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:56:55.997012 kernel: BIOS-provided physical RAM map: Jan 21 00:56:55.997018 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 21 00:56:55.997024 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 21 00:56:55.997030 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 21 00:56:55.997040 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 21 00:56:55.997047 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 21 00:56:55.997053 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 21 00:56:55.997060 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 21 00:56:55.997066 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 21 00:56:55.997072 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 21 00:56:55.997078 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 21 00:56:55.997085 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 21 00:56:55.997095 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 21 00:56:55.997102 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 21 00:56:55.997108 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 21 00:56:55.997115 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 21 00:56:55.997122 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 21 00:56:55.997128 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 21 00:56:55.997136 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 21 00:56:55.997143 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 21 00:56:55.997150 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 21 00:56:55.997168 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 21 00:56:55.997175 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 21 00:56:55.997181 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 21 00:56:55.997188 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 21 00:56:55.997194 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 21 00:56:55.997201 kernel: NX (Execute Disable) protection: active Jan 21 00:56:55.997208 kernel: APIC: Static calls initialized Jan 21 00:56:55.997215 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 21 00:56:55.997223 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 21 00:56:55.997230 kernel: extended physical RAM map: Jan 21 00:56:55.997237 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 21 
00:56:55.997243 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 21 00:56:55.997250 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 21 00:56:55.997257 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 21 00:56:55.997263 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 21 00:56:55.997270 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 21 00:56:55.997277 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 21 00:56:55.997288 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 21 00:56:55.997295 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 21 00:56:55.997302 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 21 00:56:55.997309 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 21 00:56:55.997318 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 21 00:56:55.997325 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 21 00:56:55.997332 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 21 00:56:55.997339 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 21 00:56:55.997346 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 21 00:56:55.997353 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 21 00:56:55.997360 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 21 00:56:55.997367 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 21 00:56:55.997374 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 21 00:56:55.997381 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 21 00:56:55.997388 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 21 00:56:55.997397 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 21 00:56:55.997404 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 21 00:56:55.997411 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 21 00:56:55.997418 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 21 00:56:55.997425 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 21 00:56:55.997432 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 21 00:56:55.997439 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 21 00:56:55.997446 kernel: efi: EFI v2.7 by EDK II Jan 21 00:56:55.997453 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 21 00:56:55.997460 kernel: random: crng init done Jan 21 00:56:55.997468 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 21 00:56:55.997477 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 21 00:56:55.997484 kernel: secureboot: Secure boot disabled Jan 21 00:56:55.997491 kernel: SMBIOS 2.8 present. 
Jan 21 00:56:55.997498 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 21 00:56:55.997505 kernel: DMI: Memory slots populated: 1/1 Jan 21 00:56:55.997512 kernel: Hypervisor detected: KVM Jan 21 00:56:55.997519 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 21 00:56:55.997526 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 21 00:56:55.997533 kernel: kvm-clock: using sched offset of 5537415420 cycles Jan 21 00:56:55.997540 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 21 00:56:55.997550 kernel: tsc: Detected 2294.608 MHz processor Jan 21 00:56:55.997558 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 21 00:56:55.997566 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 21 00:56:55.997573 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 21 00:56:55.997581 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 21 00:56:55.997589 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 21 00:56:55.997596 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 21 00:56:55.997604 kernel: Using GB pages for direct mapping Jan 21 00:56:55.997613 kernel: ACPI: Early table checksum verification disabled Jan 21 00:56:55.997621 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 21 00:56:55.997629 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 21 00:56:55.997636 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 21 00:56:55.997644 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 21 00:56:55.997651 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 21 00:56:55.997659 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 21 00:56:55.997668 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 21 00:56:55.997676 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 21 00:56:55.997683 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 21 00:56:55.997690 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 21 00:56:55.997698 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 21 00:56:55.997705 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 21 00:56:55.997712 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 21 00:56:55.997722 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 21 00:56:55.997729 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 21 00:56:55.997737 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 21 00:56:55.997744 kernel: No NUMA configuration found Jan 21 00:56:55.997751 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 21 00:56:55.997759 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 21 00:56:55.997767 kernel: Zone ranges: Jan 21 00:56:55.997774 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 21 00:56:55.997784 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 21 00:56:55.997791 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 21 00:56:55.997798 kernel: Device empty Jan 21 00:56:55.997806 kernel: Movable zone start for each node Jan 
21 00:56:55.997813 kernel: Early memory node ranges Jan 21 00:56:55.997821 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 21 00:56:55.997828 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 21 00:56:55.997835 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 21 00:56:55.997844 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 21 00:56:55.997852 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 21 00:56:55.997859 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 21 00:56:55.997867 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 21 00:56:55.997881 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 21 00:56:55.997891 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 21 00:56:55.997898 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 21 00:56:55.997906 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 21 00:56:55.997914 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 21 00:56:55.997925 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 21 00:56:55.997933 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 21 00:56:55.997941 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 21 00:56:55.997949 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 21 00:56:55.997959 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 21 00:56:55.997968 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 21 00:56:55.997976 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 21 00:56:55.997984 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 21 00:56:55.997992 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 21 00:56:55.998001 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 21 00:56:55.998009 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 21 00:56:55.998017 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 21 00:56:55.998027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 21 00:56:55.998035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 21 00:56:55.998043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 21 00:56:55.998051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 21 00:56:55.998059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 21 00:56:55.998067 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 21 00:56:55.998076 kernel: TSC deadline timer available Jan 21 00:56:55.998085 kernel: CPU topo: Max. logical packages: 2 Jan 21 00:56:55.998094 kernel: CPU topo: Max. logical dies: 2 Jan 21 00:56:55.998102 kernel: CPU topo: Max. dies per package: 1 Jan 21 00:56:55.998109 kernel: CPU topo: Max. threads per core: 1 Jan 21 00:56:55.998117 kernel: CPU topo: Num. cores per package: 1 Jan 21 00:56:55.998125 kernel: CPU topo: Num. 
threads per package: 1 Jan 21 00:56:55.998134 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 21 00:56:55.998141 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 21 00:56:55.998158 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 21 00:56:55.998167 kernel: kvm-guest: setup PV sched yield Jan 21 00:56:55.998175 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 21 00:56:55.998183 kernel: Booting paravirtualized kernel on KVM Jan 21 00:56:55.998191 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 21 00:56:55.998199 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 21 00:56:55.998208 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 21 00:56:55.998218 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 21 00:56:55.998226 kernel: pcpu-alloc: [0] 0 1 Jan 21 00:56:55.998234 kernel: kvm-guest: PV spinlocks enabled Jan 21 00:56:55.998242 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 21 00:56:55.998251 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:56:55.998259 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 21 00:56:55.998269 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 21 00:56:55.998277 kernel: Fallback order for Node 0: 0 Jan 21 00:56:55.998285 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 21 00:56:55.998293 kernel: Policy zone: Normal Jan 21 00:56:55.998302 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 21 00:56:55.998310 kernel: software IO TLB: area num 2. Jan 21 00:56:55.998318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 21 00:56:55.998329 kernel: ftrace: allocating 40097 entries in 157 pages Jan 21 00:56:55.998337 kernel: ftrace: allocated 157 pages with 5 groups Jan 21 00:56:55.998345 kernel: Dynamic Preempt: voluntary Jan 21 00:56:55.998363 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 21 00:56:55.998372 kernel: rcu: RCU event tracing is enabled. Jan 21 00:56:55.998381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 21 00:56:55.998389 kernel: Trampoline variant of Tasks RCU enabled. Jan 21 00:56:55.998397 kernel: Rude variant of Tasks RCU enabled. Jan 21 00:56:55.998407 kernel: Tracing variant of Tasks RCU enabled. Jan 21 00:56:55.998414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 21 00:56:55.998423 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 21 00:56:55.998431 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 00:56:55.998439 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 21 00:56:55.998447 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 21 00:56:55.998455 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 21 00:56:55.998465 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 21 00:56:55.998473 kernel: Console: colour dummy device 80x25 Jan 21 00:56:55.998481 kernel: printk: legacy console [tty0] enabled Jan 21 00:56:55.998490 kernel: printk: legacy console [ttyS0] enabled Jan 21 00:56:55.998498 kernel: ACPI: Core revision 20240827 Jan 21 00:56:55.998506 kernel: APIC: Switch to symmetric I/O mode setup Jan 21 00:56:55.998514 kernel: x2apic enabled Jan 21 00:56:55.998525 kernel: APIC: Switched APIC routing to: physical x2apic Jan 21 00:56:55.998533 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 21 00:56:55.998541 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 21 00:56:55.998549 kernel: kvm-guest: setup PV IPIs Jan 21 00:56:55.998557 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 21 00:56:55.998565 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 21 00:56:55.998573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 21 00:56:55.998583 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 21 00:56:55.998591 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 21 00:56:55.998598 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 21 00:56:55.998606 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 21 00:56:55.998613 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 21 00:56:55.998621 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 21 00:56:55.998629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 21 00:56:55.998636 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 21 00:56:55.998644 kernel: TAA: Mitigation: Clear CPU buffers Jan 21 00:56:55.998651 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 21 00:56:55.998661 kernel: active return thunk: its_return_thunk Jan 21 00:56:55.998668 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 21 00:56:55.998676 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 21 00:56:55.998683 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 21 00:56:55.998691 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 21 00:56:55.998698 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 21 00:56:55.998706 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 21 00:56:55.998713 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 21 00:56:55.998721 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 21 00:56:55.998728 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 21 00:56:55.998737 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 21 00:56:55.998745 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 21 00:56:55.998752 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 21 00:56:55.998760 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 21 00:56:55.998767 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 21 00:56:55.998775 kernel: Freeing SMP alternatives memory: 32K Jan 21 00:56:55.998782 kernel: pid_max: default: 32768 minimum: 301 Jan 21 00:56:55.998790 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 21 00:56:55.998797 kernel: landlock: Up and running. Jan 21 00:56:55.998805 kernel: SELinux: Initializing. Jan 21 00:56:55.998812 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 00:56:55.998822 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 21 00:56:55.998829 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 21 00:56:55.998837 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 21 00:56:55.998845 kernel: ... version: 2 Jan 21 00:56:55.998853 kernel: ... bit width: 48 Jan 21 00:56:55.998861 kernel: ... generic registers: 8 Jan 21 00:56:55.998869 kernel: ... value mask: 0000ffffffffffff Jan 21 00:56:55.998877 kernel: ... max period: 00007fffffffffff Jan 21 00:56:55.998887 kernel: ... fixed-purpose events: 3 Jan 21 00:56:55.998895 kernel: ... event mask: 00000007000000ff Jan 21 00:56:55.998903 kernel: signal: max sigframe size: 3632 Jan 21 00:56:55.998911 kernel: rcu: Hierarchical SRCU implementation. Jan 21 00:56:55.998919 kernel: rcu: Max phase no-delay instances is 400. Jan 21 00:56:55.998927 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 21 00:56:55.998935 kernel: smp: Bringing up secondary CPUs ... Jan 21 00:56:55.998945 kernel: smpboot: x86: Booting SMP configuration: Jan 21 00:56:55.998953 kernel: .... node #0, CPUs: #1 Jan 21 00:56:55.998961 kernel: smp: Brought up 1 node, 2 CPUs Jan 21 00:56:55.998969 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 21 00:56:55.998978 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 212136K reserved, 0K cma-reserved) Jan 21 00:56:55.998986 kernel: devtmpfs: initialized Jan 21 00:56:55.998994 kernel: x86/mm: Memory block size: 128MB Jan 21 00:56:55.999002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 21 00:56:55.999012 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 21 00:56:55.999020 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 21 00:56:55.999028 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 21 00:56:55.999036 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 21 00:56:55.999044 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 21 00:56:55.999052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 21 00:56:55.999063 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 21 00:56:55.999071 kernel: pinctrl core: initialized pinctrl subsystem Jan 21 00:56:55.999079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 21 00:56:55.999087 kernel: audit: initializing netlink subsys (disabled) Jan 21 00:56:55.999095 kernel: audit: type=2000 audit(1768957011.905:1): state=initialized audit_enabled=0 res=1 Jan 21 00:56:55.999103 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 21 00:56:55.999111 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 21 00:56:55.999119 kernel: cpuidle: using governor menu Jan 21 00:56:55.999128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 21 00:56:55.999136 kernel: dca service started, version 1.12.1 Jan 21 00:56:55.999144 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 21 00:56:55.999172 kernel: PCI: Using configuration type 1 for base access Jan 21 00:56:55.999180 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 21 00:56:55.999189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 21 00:56:55.999197 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 21 00:56:55.999207 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 21 00:56:55.999215 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 21 00:56:55.999223 kernel: ACPI: Added _OSI(Module Device) Jan 21 00:56:55.999231 kernel: ACPI: Added _OSI(Processor Device) Jan 21 00:56:55.999239 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 21 00:56:55.999248 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 21 00:56:55.999256 kernel: ACPI: Interpreter enabled Jan 21 00:56:55.999266 kernel: ACPI: PM: (supports S0 S3 S5) Jan 21 00:56:55.999274 kernel: ACPI: Using IOAPIC for interrupt routing Jan 21 00:56:55.999282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 21 00:56:55.999290 kernel: PCI: Using E820 reservations for host bridge windows Jan 21 00:56:55.999298 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 21 00:56:55.999306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 21 00:56:55.999474 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 21 00:56:55.999581 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 21 00:56:55.999679 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 21 00:56:55.999689 kernel: PCI host bridge to bus 0000:00 Jan 21 00:56:56.001594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 21 00:56:56.001695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 21 00:56:56.001788 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 21 00:56:56.001875 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 21 00:56:56.001963 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 21 00:56:56.002049 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 21 00:56:56.002136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 21 00:56:56.003350 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 21 00:56:56.003473 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 21 00:56:56.003573 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 21 00:56:56.003674 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 21 00:56:56.003769 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 21 00:56:56.003864 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 21 00:56:56.003964 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 21 00:56:56.004069 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 21 00:56:56.005361 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 21 00:56:56.005486 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 21 00:56:56.005587 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 21 00:56:56.005686 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 21 00:56:56.005790 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.005895 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.005992 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 21 00:56:56.006088 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 21 00:56:56.006205 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 21 00:56:56.006307 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 21 00:56:56.006423 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.006523 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 21 00:56:56.006620 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 21 00:56:56.006715 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 21 00:56:56.006811 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 21 00:56:56.006919 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.007017 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 21 00:56:56.007117 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 21 00:56:56.007226 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 21 00:56:56.007324 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 21 00:56:56.007427 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.007527 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 21 00:56:56.007625 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 21 00:56:56.007721 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 21 00:56:56.007817 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 21 00:56:56.007919 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.008015 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 21 00:56:56.008113 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 21 00:56:56.008224 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 21 00:56:56.008321 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 21 00:56:56.008423 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.008532 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 21 00:56:56.008633 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 21 00:56:56.008728 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 21 00:56:56.008824 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 21 00:56:56.008933 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.009029 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 21 00:56:56.009125 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 21 00:56:56.009251 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 21 
00:56:56.009348 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 21 00:56:56.009457 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.009556 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 21 00:56:56.009653 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 21 00:56:56.009749 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 21 00:56:56.009848 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 21 00:56:56.009956 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.010077 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 21 00:56:56.010185 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 21 00:56:56.010283 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 21 00:56:56.010390 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 21 00:56:56.010495 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.010591 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 21 00:56:56.010687 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 21 00:56:56.010783 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 21 00:56:56.010877 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 21 00:56:56.010976 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.011074 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 21 00:56:56.011181 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 21 00:56:56.011277 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 21 00:56:56.011390 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 21 00:56:56.011496 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.011592 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 21 00:56:56.011687 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 21 00:56:56.011781 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 21 00:56:56.011876 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 21 00:56:56.011979 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.012075 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 21 00:56:56.012179 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 21 00:56:56.012275 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 21 00:56:56.012370 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 21 00:56:56.012470 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.012568 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 21 00:56:56.012663 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 21 00:56:56.012758 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 21 00:56:56.012852 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 21 00:56:56.012951 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.013048 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 21 00:56:56.013145 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 21 
00:56:56.014385 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 21 00:56:56.014511 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 21 00:56:56.014624 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.014721 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 21 00:56:56.014816 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 21 00:56:56.014918 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 21 00:56:56.015013 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 21 00:56:56.015114 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.015219 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 21 00:56:56.015313 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 21 00:56:56.015408 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 21 00:56:56.015505 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 21 00:56:56.015606 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.015700 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 21 00:56:56.015794 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 21 00:56:56.015888 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 21 00:56:56.015981 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 21 00:56:56.016086 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.016190 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 21 00:56:56.016285 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 21 00:56:56.016378 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 21 00:56:56.016472 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 21 00:56:56.018150 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.018279 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 21 00:56:56.018384 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 21 00:56:56.018479 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 21 00:56:56.018574 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 21 00:56:56.018676 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.018772 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 21 00:56:56.018870 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 21 00:56:56.018964 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 21 00:56:56.019057 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 21 00:56:56.019208 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.019317 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 21 00:56:56.019412 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 21 00:56:56.019518 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 21 00:56:56.019613 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 21 00:56:56.019713 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.019811 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 21 00:56:56.019906 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 21 00:56:56.020000 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 21 00:56:56.020094 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 21 00:56:56.021120 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.021242 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 21 00:56:56.021343 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 21 00:56:56.021438 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 21 00:56:56.021535 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 21 00:56:56.021638 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.021734 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 21 00:56:56.021828 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 21 00:56:56.021924 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 21 00:56:56.022018 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 21 00:56:56.022117 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.022890 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 21 00:56:56.023003 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 21 00:56:56.023101 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 21 00:56:56.023212 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 21 00:56:56.023316 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.023413 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 21 00:56:56.023508 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 21 00:56:56.023603 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 21 00:56:56.023697 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 21 00:56:56.023801 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 21 00:56:56.023897 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 21 00:56:56.023992 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 21 00:56:56.024087 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 21 00:56:56.024214 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 21 00:56:56.024319 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 21 00:56:56.024417 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 21 00:56:56.024519 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 21 00:56:56.024615 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 21 00:56:56.024710 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 21 00:56:56.024810 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 21 00:56:56.024907 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 21 00:56:56.025011 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 21 00:56:56.025108 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 21 00:56:56.025213 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 21 00:56:56.026991 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 21 00:56:56.027097 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 21 00:56:56.027235 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.027333 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 21 00:56:56.027438 kernel: pci_bus 0000:02: extended config space not accessible Jan 21 00:56:56.027451 kernel: acpiphp: Slot [1] registered Jan 21 00:56:56.027460 kernel: acpiphp: Slot [0] registered Jan 21 00:56:56.027469 kernel: acpiphp: Slot [2] registered Jan 21 00:56:56.027480 kernel: acpiphp: Slot [3] registered Jan 21 00:56:56.027489 kernel: acpiphp: Slot [4] registered Jan 21 00:56:56.027498 kernel: acpiphp: Slot [5] registered Jan 21 00:56:56.027506 kernel: acpiphp: Slot [6] registered Jan 21 00:56:56.027515 kernel: acpiphp: Slot [7] registered Jan 21 00:56:56.027523 kernel: acpiphp: Slot [8] registered Jan 21 00:56:56.027531 kernel: acpiphp: Slot [9] registered Jan 21 00:56:56.027542 kernel: acpiphp: Slot [10] registered Jan 21 00:56:56.027551 kernel: acpiphp: Slot [11] registered Jan 21 00:56:56.027559 kernel: acpiphp: Slot [12] registered Jan 21 00:56:56.027568 kernel: acpiphp: Slot [13] registered Jan 21 00:56:56.027576 kernel: acpiphp: Slot [14] registered Jan 21 00:56:56.027585 kernel: acpiphp: Slot [15] registered Jan 21 00:56:56.027593 kernel: acpiphp: Slot [16] registered Jan 21 00:56:56.027601 kernel: acpiphp: Slot [17] registered Jan 21 00:56:56.027612 kernel: acpiphp: Slot [18] registered Jan 21 00:56:56.027621 kernel: acpiphp: Slot [19] registered Jan 21 00:56:56.027629 kernel: acpiphp: Slot [20] registered Jan 21 00:56:56.027638 kernel: acpiphp: Slot [21] registered Jan 21 00:56:56.027646 kernel: acpiphp: Slot [22] registered Jan 21 00:56:56.027663 kernel: acpiphp: Slot [23] registered Jan 21 00:56:56.027681 kernel: acpiphp: Slot [24] registered Jan 21 00:56:56.027704 kernel: acpiphp: Slot [25] registered Jan 21 00:56:56.027722 kernel: acpiphp: Slot [26] registered Jan 21 00:56:56.027739 kernel: acpiphp: Slot [27] registered Jan 21 00:56:56.027759 kernel: acpiphp: Slot [28] registered Jan 21 00:56:56.027777 kernel: acpiphp: Slot [29] registered Jan 21 00:56:56.027795 kernel: acpiphp: Slot [30] registered Jan 21 00:56:56.027812 kernel: acpiphp: Slot [31] registered Jan 21 00:56:56.028024 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 21 00:56:56.028146 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 21 00:56:56.028274 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 21 00:56:56.028286 kernel: acpiphp: Slot [0-2] registered Jan 21 00:56:56.028389 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 21 00:56:56.028488 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 21 00:56:56.028590 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 21 00:56:56.028688 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 21 00:56:56.028785 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 21 00:56:56.028796 kernel: acpiphp: Slot [0-3] registered Jan 21 00:56:56.028898 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 21 00:56:56.028997 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 21 00:56:56.029097 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 21 00:56:56.029216 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 21 00:56:56.029228 
kernel: acpiphp: Slot [0-4] registered Jan 21 00:56:56.029330 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 21 00:56:56.029429 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 21 00:56:56.029524 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 21 00:56:56.029538 kernel: acpiphp: Slot [0-5] registered Jan 21 00:56:56.029641 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 21 00:56:56.029737 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 21 00:56:56.029834 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 21 00:56:56.029933 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 21 00:56:56.029944 kernel: acpiphp: Slot [0-6] registered Jan 21 00:56:56.030042 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 21 00:56:56.030054 kernel: acpiphp: Slot [0-7] registered Jan 21 00:56:56.030148 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 21 00:56:56.030687 kernel: acpiphp: Slot [0-8] registered Jan 21 00:56:56.030808 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 21 00:56:56.030820 kernel: acpiphp: Slot [0-9] registered Jan 21 00:56:56.030917 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 21 00:56:56.030934 kernel: acpiphp: Slot [0-10] registered Jan 21 00:56:56.031029 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 21 00:56:56.031041 kernel: acpiphp: Slot [0-11] registered Jan 21 00:56:56.031137 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 21 00:56:56.031149 kernel: acpiphp: Slot [0-12] registered Jan 21 00:56:56.031805 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 21 00:56:56.031822 kernel: acpiphp: Slot [0-13] registered Jan 21 00:56:56.031929 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 21 00:56:56.031942 kernel: acpiphp: Slot [0-14] registered Jan 21 00:56:56.032039 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 21 00:56:56.032051 kernel: acpiphp: Slot [0-15] registered Jan 21 00:56:56.032147 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 21 00:56:56.032184 kernel: acpiphp: Slot [0-16] registered Jan 21 00:56:56.032282 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 21 00:56:56.032294 kernel: acpiphp: Slot [0-17] registered Jan 21 00:56:56.032390 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 21 00:56:56.032401 kernel: acpiphp: Slot [0-18] registered Jan 21 00:56:56.032496 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 21 00:56:56.032508 kernel: acpiphp: Slot [0-19] registered Jan 21 00:56:56.032606 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 21 00:56:56.032617 kernel: acpiphp: Slot [0-20] registered Jan 21 00:56:56.032712 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 21 00:56:56.032723 kernel: acpiphp: Slot [0-21] registered Jan 21 00:56:56.032817 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 21 00:56:56.032829 kernel: acpiphp: Slot [0-22] registered Jan 21 00:56:56.032926 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 21 00:56:56.032937 kernel: acpiphp: Slot [0-23] registered Jan 21 00:56:56.033032 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 21 00:56:56.033044 kernel: acpiphp: Slot [0-24] registered Jan 21 00:56:56.033139 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 21 00:56:56.033150 kernel: acpiphp: Slot [0-25] registered Jan 21 00:56:56.035016 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 21 00:56:56.035033 kernel: acpiphp: Slot [0-26] registered Jan 21 00:56:56.035130 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 21 00:56:56.035142 kernel: acpiphp: Slot [0-27] registered Jan 21 00:56:56.035256 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 21 00:56:56.035267 kernel: acpiphp: Slot [0-28] registered Jan 21 00:56:56.035363 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 21 00:56:56.035377 kernel: acpiphp: Slot [0-29] registered Jan 21 00:56:56.035471 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 21 00:56:56.035483 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 21 00:56:56.035492 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 21 00:56:56.035500 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 21 00:56:56.035509 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 21 00:56:56.035518 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 21 00:56:56.035529 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 21 00:56:56.035538 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 21 00:56:56.035546 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 21 00:56:56.035555 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 21 00:56:56.035563 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 21 00:56:56.035571 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 21 00:56:56.035580 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 21 00:56:56.035591 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 21 00:56:56.035599 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 21 00:56:56.035608 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 21 00:56:56.035616 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 21 00:56:56.035625 kernel: iommu: Default domain type: Translated Jan 21 00:56:56.035634 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 21 00:56:56.035643 kernel: efivars: Registered efivars operations Jan 21 00:56:56.035653 kernel: PCI: Using ACPI for IRQ routing Jan 21 00:56:56.035661 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 21 00:56:56.035670 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 21 00:56:56.035678 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 21 00:56:56.035686 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 21 00:56:56.035695 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 21 00:56:56.035703 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 21 00:56:56.035713 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 21 00:56:56.035721 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 21 00:56:56.035730 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 21 00:56:56.035738 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 21 00:56:56.035834 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 21 00:56:56.035930 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 21 00:56:56.036029 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 21 00:56:56.036045 kernel: vgaarb: loaded Jan 21 00:56:56.036054 kernel: clocksource: Switched to clocksource kvm-clock Jan 21 00:56:56.036062 kernel: VFS: Disk quotas dquot_6.6.0 Jan 21 00:56:56.036071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 21 00:56:56.036080 kernel: pnp: PnP ACPI init Jan 21 00:56:56.036201 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 21 00:56:56.036217 kernel: pnp: PnP ACPI: found 5 devices Jan 21 00:56:56.036226 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 21 00:56:56.036235 kernel: NET: Registered PF_INET protocol family Jan 21 00:56:56.036244 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 21 00:56:56.036252 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 21 00:56:56.036261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 21 00:56:56.036274 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 21 00:56:56.036288 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 21 00:56:56.036297 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 21 00:56:56.036306 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 00:56:56.036314 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 21 00:56:56.036323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 21 00:56:56.036331 kernel: NET: Registered PF_XDP protocol family Jan 21 00:56:56.036436 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 21 00:56:56.036536 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 21 00:56:56.036635 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 21 00:56:56.036733 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 21 00:56:56.036830 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 21 00:56:56.036928 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 21 00:56:56.037026 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 21 00:56:56.037128 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 21 00:56:56.039285 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 21 00:56:56.039398 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 21 00:56:56.039499 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 21 00:56:56.039601 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 21 00:56:56.039701 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 21 00:56:56.039800 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 21 00:56:56.039904 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 21 00:56:56.040003 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 21 00:56:56.040102 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 21 00:56:56.040217 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 21 00:56:56.040318 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 21 00:56:56.040418 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 21 00:56:56.040519 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 21 00:56:56.040617 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 21 00:56:56.040714 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 21 00:56:56.040812 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 21 00:56:56.040908 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 21 00:56:56.041004 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 21 00:56:56.041103 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 21 00:56:56.042909 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 21 00:56:56.043025 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 21 00:56:56.043129 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 21 00:56:56.043287 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 21 00:56:56.043388 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 21 00:56:56.043487 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 21 00:56:56.043591 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 21 00:56:56.043690 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 21 00:56:56.043788 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 21 00:56:56.043887 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 21 00:56:56.043985 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 21 00:56:56.044082 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 21 00:56:56.044190 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 21 00:56:56.044288 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 21 00:56:56.044385 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 21 00:56:56.044481 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.044577 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.044673 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.044770 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.044870 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.044966 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.045061 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.045636 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.045758 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.045856 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.045957 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.046052 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.046148 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.046252 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.046360 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 21 
00:56:56.046458 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.046558 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.048233 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.048348 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.048446 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.048542 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.048639 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.048735 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.048835 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.048931 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.049026 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.049121 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.050271 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.050393 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.050496 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.050591 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 21 00:56:56.050686 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 21 00:56:56.050781 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 21 00:56:56.050878 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 21 00:56:56.050974 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 21 00:56:56.051069 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 21 00:56:56.052173 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 21 00:56:56.052285 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 21 00:56:56.052384 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 21 00:56:56.052482 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 21 00:56:56.052579 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 21 00:56:56.052676 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 21 00:56:56.052777 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 21 00:56:56.052873 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.052968 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.053064 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.054598 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.054726 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.054828 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.054940 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.055039 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.055137 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.055247 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.055345 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.055440 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.055541 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.055638 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.055734 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.055828 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.055926 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.056021 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.056118 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.056907 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.057012 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.057109 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.057229 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.057325 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.057422 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.057521 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.057618 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.057714 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.057810 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 21 00:56:56.057905 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 21 00:56:56.058006 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 21 00:56:56.058105 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 21 00:56:56.059276 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 21 00:56:56.059397 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.059499 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 21 00:56:56.059598 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 21 00:56:56.059693 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 21 00:56:56.059788 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.059891 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 21 00:56:56.059991 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 21 00:56:56.060086 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 21 00:56:56.060620 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 21 00:56:56.060731 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 21 00:56:56.060828 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 21 00:56:56.060925 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 21 
00:56:56.061021 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 21 00:56:56.061115 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 21 00:56:56.061233 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 21 00:56:56.061331 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 21 00:56:56.061425 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 21 00:56:56.061519 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 21 00:56:56.061614 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 21 00:56:56.061709 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 21 00:56:56.061803 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 21 00:56:56.061903 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 21 00:56:56.061997 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 21 00:56:56.062091 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 21 00:56:56.062202 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 21 00:56:56.062298 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 21 00:56:56.062406 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 21 00:56:56.062502 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 21 00:56:56.062598 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 21 00:56:56.062693 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 21 00:56:56.062789 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 21 00:56:56.062884 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 21 00:56:56.062979 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 21 00:56:56.063075 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 21 00:56:56.063183 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 21 00:56:56.063278 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 21 00:56:56.063375 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 21 00:56:56.063469 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 21 00:56:56.063563 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 21 00:56:56.063659 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 21 00:56:56.063753 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 21 00:56:56.063850 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 21 00:56:56.063947 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 21 00:56:56.064042 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 21 00:56:56.064135 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 21 00:56:56.064848 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 21 00:56:56.064956 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 21 00:56:56.065055 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 21 00:56:56.065165 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 21 00:56:56.065262 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 21 00:56:56.065358 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 21 
00:56:56.065455 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 21 00:56:56.065551 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 21 00:56:56.065646 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 21 00:56:56.065741 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 21 00:56:56.065842 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 21 00:56:56.065936 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 21 00:56:56.066033 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 21 00:56:56.066128 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 21 00:56:56.066232 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 21 00:56:56.066327 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 21 00:56:56.066436 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 21 00:56:56.066531 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 21 00:56:56.066628 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 21 00:56:56.066722 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 21 00:56:56.066817 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 21 00:56:56.067071 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 21 00:56:56.067189 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 21 00:56:56.067286 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 21 00:56:56.067380 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 21 00:56:56.067475 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 21 00:56:56.067573 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 21 00:56:56.067667 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 21 00:56:56.067765 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 21 00:56:56.067860 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 21 00:56:56.067955 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 21 00:56:56.068049 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 21 00:56:56.068144 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 21 00:56:56.068277 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 21 00:56:56.068377 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 21 00:56:56.068474 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 21 00:56:56.068569 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 21 00:56:56.068663 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 21 00:56:56.068760 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 21 00:56:56.068855 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 21 00:56:56.068952 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 21 00:56:56.069046 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 21 00:56:56.069143 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 21 00:56:56.069479 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 21 00:56:56.069575 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 21 00:56:56.069669 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 21 
00:56:56.069770 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 21 00:56:56.069864 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 21 00:56:56.069959 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 21 00:56:56.070053 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 21 00:56:56.070158 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 21 00:56:56.070543 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 21 00:56:56.070648 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 21 00:56:56.070743 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 21 00:56:56.070840 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 21 00:56:56.070936 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 21 00:56:56.071030 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 21 00:56:56.071125 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 21 00:56:56.071235 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 21 00:56:56.071324 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 21 00:56:56.071410 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 21 00:56:56.071497 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 21 00:56:56.071582 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 21 00:56:56.071667 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 21 00:56:56.071766 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 21 00:56:56.071857 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 21 00:56:56.071946 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.072042 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 21 00:56:56.072135 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 21 00:56:56.072260 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 21 00:56:56.072359 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 21 00:56:56.072449 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 21 00:56:56.072544 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 21 00:56:56.072633 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 21 00:56:56.072726 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 21 00:56:56.072818 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 21 00:56:56.072914 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 21 00:56:56.073003 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 21 00:56:56.073098 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 21 00:56:56.073196 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 21 00:56:56.073294 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 21 00:56:56.073387 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 21 00:56:56.073482 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 21 00:56:56.073571 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 21 00:56:56.073665 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 21 00:56:56.073754 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 21 00:56:56.073850 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 21 00:56:56.073941 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 21 00:56:56.074037 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 21 00:56:56.074126 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 21 00:56:56.074237 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 21 00:56:56.074329 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 21 00:56:56.074437 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 21 00:56:56.074528 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 21 00:56:56.074623 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 21 00:56:56.074716 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 21 00:56:56.074812 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 21 00:56:56.074903 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 21 00:56:56.074995 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 21 00:56:56.075084 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 21 00:56:56.076242 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 21 00:56:56.076356 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 21 00:56:56.076446 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 21 00:56:56.076542 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 21 00:56:56.076631 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 21 00:56:56.076720 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 21 00:56:56.076817 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 21 00:56:56.076907 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 21 00:56:56.076995 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 21 00:56:56.077092 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 21 00:56:56.077191 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 21 00:56:56.077280 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 21 00:56:56.077377 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 21 00:56:56.077465 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 21 00:56:56.077554 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 21 00:56:56.077647 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 21 00:56:56.077736 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 21 00:56:56.077826 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 21 00:56:56.077919 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 21 00:56:56.078008 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 21 00:56:56.078096 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 21 00:56:56.079246 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 21 00:56:56.079354 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 21 00:56:56.079449 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 21 00:56:56.079544 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 21 00:56:56.079634 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 21 00:56:56.079724 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 21 00:56:56.079818 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 21 00:56:56.079907 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 21 00:56:56.079998 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 21 00:56:56.080093 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 21 00:56:56.080198 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 21 00:56:56.080288 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 21 00:56:56.080386 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 21 00:56:56.080479 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 21 00:56:56.080568 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 21 00:56:56.080663 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 21 00:56:56.080753 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 21 00:56:56.080842 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 21 00:56:56.080854 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 21 00:56:56.080865 kernel: PCI: CLS 0 bytes, default 64 Jan 21 00:56:56.080874 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 21 00:56:56.080883 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 21 00:56:56.080892 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 21 00:56:56.080900 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 21 00:56:56.080909 kernel: Initialise system trusted keyrings Jan 21 00:56:56.080918 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 21 00:56:56.080929 kernel: Key type asymmetric registered Jan 21 00:56:56.080937 kernel: Asymmetric key parser 'x509' registered Jan 21 00:56:56.080946 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 21 00:56:56.080955 kernel: io scheduler mq-deadline registered Jan 21 00:56:56.080963 kernel: io scheduler kyber registered Jan 21 00:56:56.080971 kernel: io scheduler bfq registered Jan 21 00:56:56.081076 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 21 00:56:56.082739 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 21 00:56:56.082855 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 21 00:56:56.082955 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 21 00:56:56.083054 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 21 00:56:56.083186 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 21 00:56:56.083296 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 21 00:56:56.083393 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 21 00:56:56.083491 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 21 00:56:56.083587 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 21 00:56:56.083684 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 21 00:56:56.083782 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 21 00:56:56.083879 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 21 00:56:56.084071 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 21 00:56:56.084183 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 21 00:56:56.084282 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 21 00:56:56.084298 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 21 00:56:56.084396 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 21 00:56:56.084494 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 21 00:56:56.084591 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 21 00:56:56.084688 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 21 00:56:56.084789 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 21 00:56:56.084886 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 21 00:56:56.084984 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 21 00:56:56.085079 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 21 00:56:56.085185 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 21 00:56:56.085284 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 21 00:56:56.085382 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 21 00:56:56.085477 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 21 00:56:56.085574 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 21 00:56:56.085679 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 21 00:56:56.085779 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 21 00:56:56.085875 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 21 00:56:56.085886 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 21 00:56:56.085982 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 21 00:56:56.086078 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 21 00:56:56.086183 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 21 00:56:56.086279 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 21 00:56:56.086387 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 21 00:56:56.086484 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 21 00:56:56.086581 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 21 00:56:56.086677 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 21 00:56:56.086775 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 21 00:56:56.086871 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 21 00:56:56.086967 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 21 00:56:56.087065 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 21 00:56:56.087168 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 21 00:56:56.087267 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 21 00:56:56.087363 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 21 00:56:56.087459 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 21 00:56:56.087470 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 21 00:56:56.087567 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 21 00:56:56.087662 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 21 00:56:56.087766 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 21 00:56:56.087862 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 21 00:56:56.087962 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 21 00:56:56.088057 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 21 00:56:56.088171 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 21 00:56:56.088273 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 21 00:56:56.088370 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 21 00:56:56.088466 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 21 00:56:56.088477 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 21 00:56:56.088486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 21 00:56:56.088495 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 21 00:56:56.088504 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 21 00:56:56.088515 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 21 00:56:56.088524 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 21 00:56:56.088625 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 21 00:56:56.088718 kernel: rtc_cmos 00:03: registered as rtc0 Jan 21 00:56:56.088729 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 21 00:56:56.088817 kernel: rtc_cmos 00:03: setting system clock to 2026-01-21T00:56:54 UTC (1768957014) Jan 21 00:56:56.088910 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 21 00:56:56.088921 kernel: intel_pstate: CPU model not supported Jan 21 00:56:56.088930 kernel: efifb: probing for efifb Jan 21 00:56:56.088939 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 21 00:56:56.088947 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 21 00:56:56.088956 kernel: efifb: scrolling: redraw Jan 21 00:56:56.088965 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 21 00:56:56.088976 kernel: Console: switching to colour frame buffer device 160x50 Jan 21 00:56:56.088985 kernel: fb0: EFI VGA frame buffer device Jan 21 00:56:56.088994 kernel: pstore: Using crash dump compression: deflate Jan 21 00:56:56.089002 kernel: pstore: Registered efi_pstore as persistent store backend Jan 21 00:56:56.089011 kernel: NET: Registered PF_INET6 protocol family Jan 21 00:56:56.089020 kernel: Segment Routing with IPv6 Jan 21 00:56:56.089029 kernel: In-situ OAM (IOAM) with IPv6 Jan 21 00:56:56.089040 kernel: NET: Registered PF_PACKET protocol family Jan 21 00:56:56.089048 kernel: Key type dns_resolver registered Jan 21 00:56:56.089058 kernel: IPI shorthand broadcast: enabled Jan 21 00:56:56.089066 kernel: sched_clock: Marking stable (2362077704, 153825050)->(2814432615, -298529861) Jan 21 00:56:56.089075 kernel: registered taskstats version 1 Jan 21 00:56:56.089084 kernel: Loading compiled-in X.509 certificates Jan 21 00:56:56.089092 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 169e95345ec0c7da7389f5f6d7b9c06dfd352178' Jan 21 00:56:56.089103 kernel: Demotion targets for Node 0: null Jan 21 00:56:56.089112 kernel: Key type .fscrypt registered Jan 21 00:56:56.089120 kernel: Key type fscrypt-provisioning registered Jan 21 00:56:56.089128 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 21 00:56:56.089137 kernel: ima: Allocated hash algorithm: sha1 Jan 21 00:56:56.089146 kernel: ima: No architecture policies found Jan 21 00:56:56.089160 kernel: clk: Disabling unused clocks Jan 21 00:56:56.089169 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 21 00:56:56.089180 kernel: Write protecting the kernel read-only data: 47104k Jan 21 00:56:56.089188 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 21 00:56:56.089197 kernel: Run /init as init process Jan 21 00:56:56.089205 kernel: with arguments: Jan 21 00:56:56.089214 kernel: /init Jan 21 00:56:56.089223 kernel: with environment: Jan 21 00:56:56.089231 kernel: HOME=/ Jan 21 00:56:56.089242 kernel: TERM=linux Jan 21 00:56:56.089250 kernel: SCSI subsystem initialized Jan 21 00:56:56.089259 kernel: libata version 3.00 loaded. Jan 21 00:56:56.089361 kernel: ahci 0000:00:1f.2: version 3.0 Jan 21 00:56:56.089374 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 21 00:56:56.089469 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 21 00:56:56.089565 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 21 00:56:56.089663 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 21 00:56:56.089773 kernel: scsi host0: ahci Jan 21 00:56:56.089881 kernel: scsi host1: ahci Jan 21 00:56:56.090008 kernel: scsi host2: ahci Jan 21 00:56:56.090109 kernel: scsi host3: ahci Jan 21 00:56:56.090228 kernel: scsi host4: ahci Jan 21 00:56:56.090331 kernel: scsi host5: ahci Jan 21 00:56:56.090344 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 21 00:56:56.090362 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 21 00:56:56.090371 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 21 00:56:56.090380 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 21 00:56:56.090392 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 21 00:56:56.090401 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 21 00:56:56.090410 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090419 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090428 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090437 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090445 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090456 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 21 00:56:56.090464 kernel: ACPI: bus type USB registered Jan 21 00:56:56.090474 kernel: usbcore: registered new interface driver usbfs Jan 21 00:56:56.090482 kernel: usbcore: registered new interface driver hub Jan 21 00:56:56.090491 kernel: usbcore: registered new device driver usb Jan 21 00:56:56.090597 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 21 00:56:56.090698 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 21 00:56:56.090802 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 21 00:56:56.090903 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 21 00:56:56.091026 kernel: hub 1-0:1.0: USB hub found Jan 21 00:56:56.091134 kernel: hub 1-0:1.0: 2 ports detected Jan 21 00:56:56.091253 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 21 00:56:56.091354 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks 
(53.7 GB/50.0 GiB) Jan 21 00:56:56.091368 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 21 00:56:56.091378 kernel: GPT:25804799 != 104857599 Jan 21 00:56:56.091387 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 21 00:56:56.091396 kernel: GPT:25804799 != 104857599 Jan 21 00:56:56.091404 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 21 00:56:56.091412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 21 00:56:56.091423 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 21 00:56:56.091432 kernel: device-mapper: uevent: version 1.0.3 Jan 21 00:56:56.091441 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 21 00:56:56.091450 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 21 00:56:56.091459 kernel: raid6: avx512x4 gen() 42074 MB/s Jan 21 00:56:56.091467 kernel: raid6: avx512x2 gen() 45701 MB/s Jan 21 00:56:56.091476 kernel: raid6: avx512x1 gen() 44330 MB/s Jan 21 00:56:56.091487 kernel: raid6: avx2x4 gen() 33812 MB/s Jan 21 00:56:56.091495 kernel: raid6: avx2x2 gen() 33802 MB/s Jan 21 00:56:56.091504 kernel: raid6: avx2x1 gen() 26931 MB/s Jan 21 00:56:56.091514 kernel: raid6: using algorithm avx512x2 gen() 45701 MB/s Jan 21 00:56:56.091523 kernel: raid6: .... xor() 25229 MB/s, rmw enabled Jan 21 00:56:56.091534 kernel: raid6: using avx512x2 recovery algorithm Jan 21 00:56:56.091543 kernel: xor: automatically using best checksumming function avx Jan 21 00:56:56.091554 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 21 00:56:56.091563 kernel: BTRFS: device fsid 1d50d7f2-b244-4434-b37e-796fa0c23345 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (203) Jan 21 00:56:56.091572 kernel: BTRFS info (device dm-0): first mount of filesystem 1d50d7f2-b244-4434-b37e-796fa0c23345 Jan 21 00:56:56.091693 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 21 00:56:56.091707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:56:56.091716 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 21 00:56:56.091727 kernel: BTRFS info (device dm-0): enabling free space tree Jan 21 00:56:56.091736 kernel: loop: module loaded Jan 21 00:56:56.091744 kernel: loop0: detected capacity change from 0 to 100552 Jan 21 00:56:56.091753 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 21 00:56:56.091764 systemd[1]: Successfully made /usr/ read-only. Jan 21 00:56:56.091777 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 00:56:56.091789 systemd[1]: Detected virtualization kvm. Jan 21 00:56:56.091798 systemd[1]: Detected architecture x86-64. Jan 21 00:56:56.091807 systemd[1]: Running in initrd. Jan 21 00:56:56.091816 systemd[1]: No hostname configured, using default hostname. Jan 21 00:56:56.091826 systemd[1]: Hostname set to . Jan 21 00:56:56.091835 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 00:56:56.091846 systemd[1]: Queued start job for default target initrd.target. 
Jan 21 00:56:56.091855 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 00:56:56.091865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:56:56.091874 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 00:56:56.091884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 21 00:56:56.091893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 00:56:56.091906 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 21 00:56:56.091915 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 21 00:56:56.091924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:56:56.091934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:56:56.091944 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 21 00:56:56.091953 systemd[1]: Reached target paths.target - Path Units. Jan 21 00:56:56.091962 systemd[1]: Reached target slices.target - Slice Units. Jan 21 00:56:56.091974 systemd[1]: Reached target swap.target - Swaps. Jan 21 00:56:56.091983 systemd[1]: Reached target timers.target - Timer Units. Jan 21 00:56:56.091992 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 00:56:56.092002 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 21 00:56:56.092011 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:56:56.092020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 21 00:56:56.092029 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 21 00:56:56.094194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:56:56.094209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 00:56:56.094223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:56:56.094233 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 00:56:56.094243 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 21 00:56:56.094253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 21 00:56:56.094265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 00:56:56.094274 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 21 00:56:56.094284 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 21 00:56:56.094293 systemd[1]: Starting systemd-fsck-usr.service... Jan 21 00:56:56.094306 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 00:56:56.094316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 00:56:56.094327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:56:56.094337 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 21 00:56:56.094357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:56:56.094367 systemd[1]: Finished systemd-fsck-usr.service. Jan 21 00:56:56.094377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 21 00:56:56.094418 systemd-journald[341]: Collecting audit messages is enabled. Jan 21 00:56:56.094444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 21 00:56:56.094454 kernel: audit: type=1130 audit(1768957016.012:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:56:56.094477 kernel: audit: type=1130 audit(1768957016.020:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 21 00:56:56.094496 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 21 00:56:56.094506 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 00:56:56.094515 kernel: Bridge firewalling registered Jan 21 00:56:56.094526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 00:56:56.094536 kernel: audit: type=1130 audit(1768957016.049:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094545 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 00:56:56.094554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 00:56:56.094564 kernel: audit: type=1130 audit(1768957016.062:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 00:56:56.094584 kernel: audit: type=1130 audit(1768957016.070:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094594 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 21 00:56:56.094604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:56:56.094614 kernel: audit: type=1130 audit(1768957016.086:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.094623 kernel: audit: type=1334 audit(1768957016.090:8): prog-id=6 op=LOAD Jan 21 00:56:56.094633 systemd-journald[341]: Journal started Jan 21 00:56:56.094656 systemd-journald[341]: Runtime Journal (/run/log/journal/2dd4da5146244a399a483b58a97a363b) is 8M, max 77.9M, 69.9M free. 
Jan 21 00:56:56.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.090000 audit: BPF prog-id=6 op=LOAD Jan 21 00:56:56.036301 systemd-modules-load[342]: Inserted module 'br_netfilter' Jan 21 00:56:56.098181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 00:56:56.100713 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 00:56:56.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.106542 kernel: audit: type=1130 audit(1768957016.101:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.110701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 00:56:56.115780 dracut-cmdline[364]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=febd26d0ecadb4f9abb44f6b2a89e793f13258cbb011a4bfe78289e5448c772a Jan 21 00:56:56.126999 systemd-tmpfiles[386]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 21 00:56:56.132742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:56:56.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.140530 kernel: audit: type=1130 audit(1768957016.136:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:56:56.153362 systemd-resolved[366]: Positive Trust Anchors: Jan 21 00:56:56.154116 systemd-resolved[366]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 00:56:56.154120 systemd-resolved[366]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 00:56:56.154165 systemd-resolved[366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 00:56:56.182099 systemd-resolved[366]: Defaulting to hostname 'linux'. Jan 21 00:56:56.183793 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 00:56:56.185094 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:56:56.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.213179 kernel: Loading iSCSI transport class v2.0-870. Jan 21 00:56:56.229187 kernel: iscsi: registered transport (tcp) Jan 21 00:56:56.252504 kernel: iscsi: registered transport (qla4xxx) Jan 21 00:56:56.252580 kernel: QLogic iSCSI HBA Driver Jan 21 00:56:56.278362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 00:56:56.307839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:56:56.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.310931 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 00:56:56.352838 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 21 00:56:56.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.354882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 21 00:56:56.356023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 21 00:56:56.390922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 21 00:56:56.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.392000 audit: BPF prog-id=7 op=LOAD Jan 21 00:56:56.392000 audit: BPF prog-id=8 op=LOAD Jan 21 00:56:56.392710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:56:56.418807 systemd-udevd[614]: Using default interface naming scheme 'v257'. Jan 21 00:56:56.427780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 21 00:56:56.430514 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 21 00:56:56.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.457945 dracut-pre-trigger[685]: rd.md=0: removing MD RAID activation Jan 21 00:56:56.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.458065 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 21 00:56:56.460000 audit: BPF prog-id=9 op=LOAD Jan 21 00:56:56.461214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 00:56:56.488140 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 00:56:56.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.491302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 00:56:56.504635 systemd-networkd[727]: lo: Link UP Jan 21 00:56:56.504642 systemd-networkd[727]: lo: Gained carrier Jan 21 00:56:56.506589 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 21 00:56:56.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.508115 systemd[1]: Reached target network.target - Network. Jan 21 00:56:56.578120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:56:56.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.581618 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 21 00:56:56.718546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 21 00:56:56.729677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 21 00:56:56.742385 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 21 00:56:56.745234 kernel: cryptd: max_cpu_qlen set to 1000 Jan 21 00:56:56.756452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 21 00:56:56.758656 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 21 00:56:56.761495 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 21 00:56:56.769248 kernel: AES CTR mode by8 optimization enabled Jan 21 00:56:56.773277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:56:56.781274 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 21 00:56:56.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.773446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 21 00:56:56.779246 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:56:56.781742 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:56:56.798301 kernel: usbcore: registered new interface driver usbhid Jan 21 00:56:56.798328 kernel: usbhid: USB HID core driver Jan 21 00:56:56.781746 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 00:56:56.782053 systemd-networkd[727]: eth0: Link UP Jan 21 00:56:56.783501 systemd-networkd[727]: eth0: Gained carrier Jan 21 00:56:56.783512 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:56:56.789775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:56:56.792261 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.94/25, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 21 00:56:56.806551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:56:56.807057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:56:56.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.816934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:56:56.819797 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Jan 21 00:56:56.824719 disk-uuid[852]: Primary Header is updated. Jan 21 00:56:56.824719 disk-uuid[852]: Secondary Entries is updated. Jan 21 00:56:56.824719 disk-uuid[852]: Secondary Header is updated. Jan 21 00:56:56.833216 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 21 00:56:56.859069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:56:56.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.932583 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 21 00:56:56.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:56.934199 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 00:56:56.935238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:56:56.936102 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 00:56:56.937860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 21 00:56:56.962316 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 21 00:56:56.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.894654 disk-uuid[868]: Warning: The kernel is still using the old partition table. Jan 21 00:56:57.894654 disk-uuid[868]: The new table will be used at the next reboot or after you Jan 21 00:56:57.894654 disk-uuid[868]: run partprobe(8) or kpartx(8) Jan 21 00:56:57.894654 disk-uuid[868]: The operation has completed successfully. Jan 21 00:56:57.904810 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 21 00:56:57.913291 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 21 00:56:57.913318 kernel: audit: type=1130 audit(1768957017.905:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.913332 kernel: audit: type=1131 audit(1768957017.905:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.904913 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 21 00:56:57.908319 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 21 00:56:57.955178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Jan 21 00:56:57.958657 kernel: BTRFS info (device vda6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:56:57.958720 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:56:57.967428 kernel: BTRFS info (device vda6): turning on async discard Jan 21 00:56:57.967494 kernel: BTRFS info (device vda6): enabling free space tree Jan 21 00:56:57.974166 kernel: BTRFS info (device vda6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:56:57.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:57.974550 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 21 00:56:57.978310 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 21 00:56:57.979386 kernel: audit: type=1130 audit(1768957017.974:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.009289 systemd-networkd[727]: eth0: Gained IPv6LL Jan 21 00:56:58.188377 ignition[932]: Ignition 2.24.0 Jan 21 00:56:58.188390 ignition[932]: Stage: fetch-offline Jan 21 00:56:58.188426 ignition[932]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:56:58.189716 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
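The disk-uuid warning above is the partitioning tool's own output: the GPT has been rewritten, but the kernel keeps using its in-memory copy of the partition table until it is told to re-read it, either at reboot or via the tools the message names. On a running system that refresh would look roughly like this (the /dev/vda device name is inferred from the vda6/vda9 partitions elsewhere in this log):
# Ask the kernel to re-read the partition table after a GPT change:
partprobe /dev/vda
# alternatively, refresh device-mapper partition mappings:
kpartx -u /dev/vda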
Jan 21 00:56:58.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.188436 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:56:58.188523 ignition[932]: parsed url from cmdline: "" Jan 21 00:56:58.188527 ignition[932]: no config URL provided Jan 21 00:56:58.195228 kernel: audit: type=1130 audit(1768957018.190:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.188532 ignition[932]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 00:56:58.188539 ignition[932]: no config at "/usr/lib/ignition/user.ign" Jan 21 00:56:58.188543 ignition[932]: failed to fetch config: resource requires networking Jan 21 00:56:58.188697 ignition[932]: Ignition finished successfully Jan 21 00:56:58.196294 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 21 00:56:58.220291 ignition[938]: Ignition 2.24.0 Jan 21 00:56:58.220303 ignition[938]: Stage: fetch Jan 21 00:56:58.220444 ignition[938]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:56:58.220452 ignition[938]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:56:58.220538 ignition[938]: parsed url from cmdline: "" Jan 21 00:56:58.220541 ignition[938]: no config URL provided Jan 21 00:56:58.220549 ignition[938]: reading system config file "/usr/lib/ignition/user.ign" Jan 21 00:56:58.220556 ignition[938]: no config at "/usr/lib/ignition/user.ign" Jan 21 00:56:58.221359 ignition[938]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 21 00:56:58.221949 ignition[938]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 21 00:56:58.221973 ignition[938]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 21 00:56:58.584287 ignition[938]: GET result: OK Jan 21 00:56:58.584453 ignition[938]: parsing config with SHA512: 67a48b30008e1134431e1bf332a2bae30b709afe84042ba32bee6239344e83a44db749a633f2239fe3d3ac96e4941c479fcf48ee32b46c9752a2620428d9a616 Jan 21 00:56:58.590550 unknown[938]: fetched base config from "system" Jan 21 00:56:58.591203 unknown[938]: fetched base config from "system" Jan 21 00:56:58.591214 unknown[938]: fetched user config from "openstack" Jan 21 00:56:58.591599 ignition[938]: fetch: fetch complete Jan 21 00:56:58.591605 ignition[938]: fetch: fetch passed Jan 21 00:56:58.591670 ignition[938]: Ignition finished successfully Jan 21 00:56:58.593952 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 21 00:56:58.597836 kernel: audit: type=1130 audit(1768957018.594:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.595578 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
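The fetch-offline stage above fails ("resource requires networking") and hands over to the real fetch stage because, on this platform, the config has to come from outside the image: Ignition waits for a config-drive labelled config-2 and in parallel queries the OpenStack metadata service. The same user data it retrieved can be inspected by hand from inside the instance (assuming curl is available in the environment you run this from):
# The config-drive fallback Ignition waited for would show up as:
ls -l /dev/disk/by-label/config-2
# The metadata-service URL Ignition actually used:
curl -s http://169.254.169.254/openstack/latest/user_data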
Jan 21 00:56:58.630392 ignition[944]: Ignition 2.24.0 Jan 21 00:56:58.630403 ignition[944]: Stage: kargs Jan 21 00:56:58.630561 ignition[944]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:56:58.630569 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:56:58.636308 kernel: audit: type=1130 audit(1768957018.632:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.632646 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 21 00:56:58.631353 ignition[944]: kargs: kargs passed Jan 21 00:56:58.635268 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 21 00:56:58.631391 ignition[944]: Ignition finished successfully Jan 21 00:56:58.662827 ignition[950]: Ignition 2.24.0 Jan 21 00:56:58.663189 ignition[950]: Stage: disks Jan 21 00:56:58.663342 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 21 00:56:58.663351 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:56:58.663953 ignition[950]: disks: disks passed Jan 21 00:56:58.669232 kernel: audit: type=1130 audit(1768957018.665:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.665402 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 21 00:56:58.663991 ignition[950]: Ignition finished successfully Jan 21 00:56:58.666207 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 21 00:56:58.669571 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 21 00:56:58.670199 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 00:56:58.670841 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 00:56:58.671567 systemd[1]: Reached target basic.target - Basic System. Jan 21 00:56:58.673032 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 21 00:56:58.723563 systemd-fsck[958]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 21 00:56:58.726373 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 21 00:56:58.730435 kernel: audit: type=1130 audit(1768957018.726:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:58.728726 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 21 00:56:58.898173 kernel: EXT4-fs (vda9): mounted filesystem cf9e7296-d0ad-4d9a-b030-d4e17a1c88bf r/w with ordered data mode. Quota mode: none. Jan 21 00:56:58.898346 systemd[1]: Mounted sysroot.mount - /sysroot. 
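The systemd-fsck line above is a clean e2fsck result for the ROOT filesystem (15 of 1631200 inodes and 112378 of 1617920 blocks in use), after which the ext4 root is mounted at /sysroot. The same check can be repeated non-destructively later (a sketch; run it only while the filesystem is unmounted or mounted read-only):
# Read-only filesystem check of the root label, answering "no" to all prompts:
e2fsck -n /dev/disk/by-label/ROOT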
Jan 21 00:56:58.899464 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 21 00:56:58.904960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 00:56:58.916864 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 21 00:56:58.918980 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 21 00:56:58.921297 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 21 00:56:58.922191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 21 00:56:58.922222 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 00:56:58.931238 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 21 00:56:58.933245 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 21 00:56:58.939196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Jan 21 00:56:58.943535 kernel: BTRFS info (device vda6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:56:58.943576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:56:58.958394 kernel: BTRFS info (device vda6): turning on async discard Jan 21 00:56:58.958453 kernel: BTRFS info (device vda6): enabling free space tree Jan 21 00:56:58.959971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 21 00:56:59.026183 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:56:59.158990 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 21 00:56:59.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:59.164208 kernel: audit: type=1130 audit(1768957019.160:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:59.165245 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 21 00:56:59.166842 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 21 00:56:59.181514 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 21 00:56:59.184183 kernel: BTRFS info (device vda6): last unmount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:56:59.219047 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 21 00:56:59.220881 ignition[1067]: INFO : Ignition 2.24.0 Jan 21 00:56:59.220881 ignition[1067]: INFO : Stage: mount Jan 21 00:56:59.220881 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:56:59.220881 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:56:59.220881 ignition[1067]: INFO : mount: mount passed Jan 21 00:56:59.220881 ignition[1067]: INFO : Ignition finished successfully Jan 21 00:56:59.228541 kernel: audit: type=1130 audit(1768957019.221:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:56:59.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:59.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:56:59.222693 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 21 00:57:00.076196 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:02.081177 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:06.091194 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:06.098442 coreos-metadata[968]: Jan 21 00:57:06.098 WARN failed to locate config-drive, using the metadata service API instead Jan 21 00:57:06.113395 coreos-metadata[968]: Jan 21 00:57:06.113 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 21 00:57:06.279212 coreos-metadata[968]: Jan 21 00:57:06.279 INFO Fetch successful Jan 21 00:57:06.279848 coreos-metadata[968]: Jan 21 00:57:06.279 INFO wrote hostname ci-4547-0-0-n-1ed4874c6e to /sysroot/etc/hostname Jan 21 00:57:06.281321 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 21 00:57:06.291988 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 00:57:06.292017 kernel: audit: type=1130 audit(1768957026.281:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:06.292039 kernel: audit: type=1131 audit(1768957026.281:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:06.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:06.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:06.281439 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 21 00:57:06.284785 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 21 00:57:06.315394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 21 00:57:06.361176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1085) Jan 21 00:57:06.364180 kernel: BTRFS info (device vda6): first mount of filesystem f0e9d057-8632-47ff-9f6c-54c0e93bf1a9 Jan 21 00:57:06.367212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 21 00:57:06.374838 kernel: BTRFS info (device vda6): turning on async discard Jan 21 00:57:06.374915 kernel: BTRFS info (device vda6): enabling free space tree Jan 21 00:57:06.377013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
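The coreos-metadata entries above show the OpenStack hostname agent giving up on a config-drive and querying the EC2-compatible metadata path instead, then writing the result into the new root. Both steps can be reproduced manually (assuming curl is available; the /sysroot path applies only inside the initrd and becomes /etc/hostname after switch-root):
# Fetch the hostname the agent retrieved:
curl -s http://169.254.169.254/latest/meta-data/hostname
# See what it wrote for the real root:
cat /sysroot/etc/hostname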
Jan 21 00:57:06.406003 ignition[1103]: INFO : Ignition 2.24.0 Jan 21 00:57:06.406003 ignition[1103]: INFO : Stage: files Jan 21 00:57:06.407538 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:06.407538 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:57:06.407538 ignition[1103]: DEBUG : files: compiled without relabeling support, skipping Jan 21 00:57:06.409029 ignition[1103]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 21 00:57:06.409029 ignition[1103]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 21 00:57:06.415384 ignition[1103]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 21 00:57:06.415946 ignition[1103]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 21 00:57:06.418405 unknown[1103]: wrote ssh authorized keys file for user: core Jan 21 00:57:06.419137 ignition[1103]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 21 00:57:06.422245 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 00:57:06.423442 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 21 00:57:06.482739 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 21 00:57:06.604828 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 21 00:57:06.604828 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 00:57:06.606453 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:06.609479 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 21 00:57:06.884508 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 21 00:57:07.495189 ignition[1103]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 21 00:57:07.495189 ignition[1103]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 21 00:57:07.496911 ignition[1103]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 00:57:07.500313 ignition[1103]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 21 00:57:07.500313 ignition[1103]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 21 00:57:07.502684 ignition[1103]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 21 00:57:07.502684 ignition[1103]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 21 00:57:07.502684 ignition[1103]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 21 00:57:07.502684 ignition[1103]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 21 00:57:07.502684 ignition[1103]: INFO : files: files passed Jan 21 00:57:07.502684 ignition[1103]: INFO : Ignition finished successfully Jan 21 00:57:07.510998 kernel: audit: type=1130 audit(1768957027.504:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.503434 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 21 00:57:07.506668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 21 00:57:07.524379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 21 00:57:07.527976 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 21 00:57:07.530302 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 21 00:57:07.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:07.536236 kernel: audit: type=1130 audit(1768957027.531:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.536280 kernel: audit: type=1131 audit(1768957027.531:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.567557 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:07.568380 initrd-setup-root-after-ignition[1138]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:07.569107 initrd-setup-root-after-ignition[1134]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 21 00:57:07.570716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 00:57:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.571911 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 21 00:57:07.576207 kernel: audit: type=1130 audit(1768957027.571:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.577333 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 21 00:57:07.614106 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 21 00:57:07.614249 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 21 00:57:07.622571 kernel: audit: type=1130 audit(1768957027.615:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.622608 kernel: audit: type=1131 audit(1768957027.615:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.615592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 21 00:57:07.623047 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 21 00:57:07.624130 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 21 00:57:07.625007 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 21 00:57:07.649278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
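The Ignition files stage logged above (ssh keys for the core user, the helm tarball, the kubernetes sysext link, prepare-helm.service and its preset) is driven entirely by the JSON config fetched earlier from the metadata service. Which files, links, and units a given config would create can be listed without applying it, for example with jq (jq availability is an assumption, as is the user data being plain Ignition JSON rather than a wrapper format; the field names are the standard Ignition v3 storage/systemd sections):
# List the files, links, and units declared in the fetched Ignition config:
curl -s http://169.254.169.254/openstack/latest/user_data > /tmp/ignition.json
jq -r '.storage.files[]?.path' /tmp/ignition.json
jq -r '.storage.links[]?.path' /tmp/ignition.json
jq -r '.systemd.units[]?.name' /tmp/ignition.json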
Jan 21 00:57:07.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.654753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 21 00:57:07.655855 kernel: audit: type=1130 audit(1768957027.649:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.673047 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 21 00:57:07.673314 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:57:07.674410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:57:07.675341 systemd[1]: Stopped target timers.target - Timer Units. Jan 21 00:57:07.676259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 21 00:57:07.680755 kernel: audit: type=1131 audit(1768957027.676:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.676380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 21 00:57:07.680844 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 21 00:57:07.681408 systemd[1]: Stopped target basic.target - Basic System. Jan 21 00:57:07.682344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 21 00:57:07.683347 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 21 00:57:07.684239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 21 00:57:07.685110 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 21 00:57:07.686020 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 21 00:57:07.686912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 21 00:57:07.687846 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 21 00:57:07.688690 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 21 00:57:07.689555 systemd[1]: Stopped target swap.target - Swaps. Jan 21 00:57:07.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.690441 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 21 00:57:07.690553 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 21 00:57:07.691860 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:57:07.692391 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:57:07.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.693167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 21 00:57:07.693242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:57:07.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.693996 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 21 00:57:07.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.694098 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 21 00:57:07.695315 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 21 00:57:07.695411 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 21 00:57:07.696256 systemd[1]: ignition-files.service: Deactivated successfully. Jan 21 00:57:07.696344 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 21 00:57:07.699325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 21 00:57:07.699878 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 21 00:57:07.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.699985 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:57:07.702361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 21 00:57:07.702765 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 21 00:57:07.702867 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:57:07.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.706744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 21 00:57:07.706845 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:57:07.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.707686 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 21 00:57:07.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.707773 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 21 00:57:07.715149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 21 00:57:07.716321 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 21 00:57:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:07.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.726537 ignition[1158]: INFO : Ignition 2.24.0 Jan 21 00:57:07.726537 ignition[1158]: INFO : Stage: umount Jan 21 00:57:07.729281 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 21 00:57:07.729281 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 21 00:57:07.729281 ignition[1158]: INFO : umount: umount passed Jan 21 00:57:07.729281 ignition[1158]: INFO : Ignition finished successfully Jan 21 00:57:07.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.728786 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 21 00:57:07.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.728894 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 21 00:57:07.731064 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 21 00:57:07.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.731107 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 21 00:57:07.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.732343 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 21 00:57:07.732394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 21 00:57:07.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.733123 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 21 00:57:07.733180 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 21 00:57:07.733844 systemd[1]: Stopped target network.target - Network. Jan 21 00:57:07.734698 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 21 00:57:07.734743 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 21 00:57:07.735907 systemd[1]: Stopped target paths.target - Path Units. Jan 21 00:57:07.738568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 21 00:57:07.742269 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 00:57:07.742722 systemd[1]: Stopped target slices.target - Slice Units. Jan 21 00:57:07.743765 systemd[1]: Stopped target sockets.target - Socket Units. Jan 21 00:57:07.744587 systemd[1]: iscsid.socket: Deactivated successfully. Jan 21 00:57:07.744629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 21 00:57:07.747702 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 21 00:57:07.749104 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 21 00:57:07.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.749506 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 21 00:57:07.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.749531 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:57:07.749894 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 21 00:57:07.749947 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 21 00:57:07.750471 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 21 00:57:07.750509 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 21 00:57:07.751198 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 21 00:57:07.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.751815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 21 00:57:07.754917 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 21 00:57:07.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.755460 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 21 00:57:07.755547 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 21 00:57:07.756815 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 21 00:57:07.756899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 21 00:57:07.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.759677 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 21 00:57:07.759795 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 21 00:57:07.762366 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 21 00:57:07.762452 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 21 00:57:07.763000 audit: BPF prog-id=6 op=UNLOAD Jan 21 00:57:07.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.765420 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 21 00:57:07.765857 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 21 00:57:07.766000 audit: BPF prog-id=9 op=UNLOAD Jan 21 00:57:07.765899 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:57:07.768289 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 21 00:57:07.768665 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 21 00:57:07.768719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 21 00:57:07.769145 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 21 00:57:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.770723 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:57:07.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.771516 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 21 00:57:07.771910 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 21 00:57:07.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.772688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:57:07.785432 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 21 00:57:07.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.785585 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 00:57:07.786514 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 21 00:57:07.786590 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 21 00:57:07.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.787257 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 21 00:57:07.787287 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:57:07.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.787949 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 21 00:57:07.787991 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 21 00:57:07.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.789515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 21 00:57:07.789558 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 21 00:57:07.791141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 21 00:57:07.791220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 21 00:57:07.797277 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 21 00:57:07.797702 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 21 00:57:07.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:07.797752 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:57:07.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.798530 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 21 00:57:07.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.798569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 00:57:07.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.799241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:57:07.799275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:07.800861 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 21 00:57:07.800940 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 21 00:57:07.811427 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 21 00:57:07.811537 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 21 00:57:07.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:07.812654 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 21 00:57:07.814016 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 21 00:57:07.833870 systemd[1]: Switching root. Jan 21 00:57:07.868123 systemd-journald[341]: Journal stopped Jan 21 00:57:09.102617 systemd-journald[341]: Received SIGTERM from PID 1 (systemd). Jan 21 00:57:09.102706 kernel: SELinux: policy capability network_peer_controls=1 Jan 21 00:57:09.102733 kernel: SELinux: policy capability open_perms=1 Jan 21 00:57:09.102744 kernel: SELinux: policy capability extended_socket_class=1 Jan 21 00:57:09.102757 kernel: SELinux: policy capability always_check_network=0 Jan 21 00:57:09.102771 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 21 00:57:09.102783 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 21 00:57:09.102794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 21 00:57:09.102808 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 21 00:57:09.102821 kernel: SELinux: policy capability userspace_initial_context=0 Jan 21 00:57:09.102832 systemd[1]: Successfully loaded SELinux policy in 70.612ms. Jan 21 00:57:09.102852 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.390ms. 
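Switching root hands control to the systemd in the real root filesystem, which immediately loads the SELinux policy (the journal is stopped and restarted across the transition, which is why journald reports receiving SIGTERM from PID 1). Once the policy is loaded, the current mode can be read straight from the kernel interface; the usual userspace helper works too if it is installed:
# 0 = permissive, 1 = enforcing:
cat /sys/fs/selinux/enforce
# equivalent, when the policy utilities are present:
getenforce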
Jan 21 00:57:09.102865 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 21 00:57:09.102877 systemd[1]: Detected virtualization kvm. Jan 21 00:57:09.102889 systemd[1]: Detected architecture x86-64. Jan 21 00:57:09.102900 systemd[1]: Detected first boot. Jan 21 00:57:09.102915 systemd[1]: Hostname set to . Jan 21 00:57:09.102926 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 21 00:57:09.102938 zram_generator::config[1201]: No configuration found. Jan 21 00:57:09.102957 kernel: Guest personality initialized and is inactive Jan 21 00:57:09.102970 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 21 00:57:09.102982 kernel: Initialized host personality Jan 21 00:57:09.102993 kernel: NET: Registered PF_VSOCK protocol family Jan 21 00:57:09.103005 systemd[1]: Populated /etc with preset unit settings. Jan 21 00:57:09.103017 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 21 00:57:09.103029 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 21 00:57:09.103041 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 21 00:57:09.103057 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 21 00:57:09.103071 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 21 00:57:09.103085 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 21 00:57:09.103097 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 21 00:57:09.103205 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 21 00:57:09.103221 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 21 00:57:09.103233 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 21 00:57:09.103245 systemd[1]: Created slice user.slice - User and Session Slice. Jan 21 00:57:09.103260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 21 00:57:09.103274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 21 00:57:09.103286 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 21 00:57:09.103298 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 21 00:57:09.103311 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 21 00:57:09.103327 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 21 00:57:09.103343 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 21 00:57:09.103355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 21 00:57:09.103366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 21 00:57:09.103377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 21 00:57:09.103389 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 21 00:57:09.103401 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 21 00:57:09.103412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 21 00:57:09.103426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 21 00:57:09.103438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 21 00:57:09.103449 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 21 00:57:09.103461 systemd[1]: Reached target slices.target - Slice Units. Jan 21 00:57:09.103472 systemd[1]: Reached target swap.target - Swaps. Jan 21 00:57:09.103484 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 21 00:57:09.103496 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 21 00:57:09.103508 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 21 00:57:09.103521 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 21 00:57:09.103534 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 21 00:57:09.103545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 21 00:57:09.103557 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 21 00:57:09.103569 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 21 00:57:09.103581 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 21 00:57:09.103592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 21 00:57:09.103606 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 21 00:57:09.103617 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 21 00:57:09.103629 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 21 00:57:09.103640 systemd[1]: Mounting media.mount - External Media Directory... Jan 21 00:57:09.103651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:09.103663 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 21 00:57:09.103675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 21 00:57:09.103687 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 21 00:57:09.103699 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 21 00:57:09.103710 systemd[1]: Reached target machines.target - Containers. Jan 21 00:57:09.103722 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 21 00:57:09.103733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:57:09.103745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 21 00:57:09.109234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 00:57:09.109250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 00:57:09.109262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 21 00:57:09.109274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:57:09.109287 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 00:57:09.109299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:57:09.109312 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 21 00:57:09.109324 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 21 00:57:09.109335 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 21 00:57:09.109347 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 21 00:57:09.109360 systemd[1]: Stopped systemd-fsck-usr.service. Jan 21 00:57:09.109374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:57:09.109386 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 21 00:57:09.109398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 21 00:57:09.109410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 21 00:57:09.109422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 21 00:57:09.109434 kernel: fuse: init (API version 7.41) Jan 21 00:57:09.109451 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 21 00:57:09.109464 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 21 00:57:09.109477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:09.109489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 21 00:57:09.109501 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 21 00:57:09.109517 systemd[1]: Mounted media.mount - External Media Directory. Jan 21 00:57:09.109529 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 21 00:57:09.109541 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 21 00:57:09.109552 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 21 00:57:09.109563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 21 00:57:09.109575 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 00:57:09.109586 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 00:57:09.109599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:57:09.109610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:57:09.109625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:57:09.109636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:57:09.109648 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 00:57:09.109660 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 00:57:09.109671 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:57:09.109684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
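The modprobe@ entries above are instances of systemd's modprobe@.service template: each instance simply loads the kernel module named by its instance suffix (configfs, dm_mod, drm, efi_pstore, fuse, loop). The same effect can be had manually, e.g. for the loop module:
# Start the template instance...
systemctl start modprobe@loop.service
# ...which is essentially equivalent to:
modprobe loop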
Jan 21 00:57:09.109696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 21 00:57:09.109708 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 21 00:57:09.109720 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 21 00:57:09.109731 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 21 00:57:09.109743 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 21 00:57:09.109758 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 21 00:57:09.109771 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 21 00:57:09.109784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 21 00:57:09.109796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:57:09.109809 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:57:09.109848 systemd-journald[1272]: Collecting audit messages is enabled. Jan 21 00:57:09.109872 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 21 00:57:09.109886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 00:57:09.109899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 21 00:57:09.109910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 00:57:09.109922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 21 00:57:09.109934 systemd-journald[1272]: Journal started Jan 21 00:57:09.109957 systemd-journald[1272]: Runtime Journal (/run/log/journal/2dd4da5146244a399a483b58a97a363b) is 8M, max 77.9M, 69.9M free. Jan 21 00:57:08.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:08.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:08.973000 audit: BPF prog-id=14 op=UNLOAD Jan 21 00:57:08.973000 audit: BPF prog-id=13 op=UNLOAD Jan 21 00:57:08.974000 audit: BPF prog-id=15 op=LOAD Jan 21 00:57:08.974000 audit: BPF prog-id=16 op=LOAD Jan 21 00:57:09.116912 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 21 00:57:09.116943 systemd[1]: Started systemd-journald.service - Journal Service. Jan 21 00:57:08.974000 audit: BPF prog-id=17 op=LOAD Jan 21 00:57:09.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 21 00:57:09.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:09.089000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 21 00:57:09.089000 audit[1272]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc9595fdf0 a2=4000 a3=0 items=0 ppid=1 pid=1272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:09.089000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 21 00:57:09.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:08.791264 systemd[1]: Queued start job for default target multi-user.target. Jan 21 00:57:09.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:08.804223 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 21 00:57:08.804643 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 21 00:57:09.117259 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 21 00:57:09.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.124078 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 21 00:57:09.126525 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 21 00:57:09.136173 kernel: ACPI: bus type drm_connector registered Jan 21 00:57:09.142819 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 21 00:57:09.145175 kernel: loop1: detected capacity change from 0 to 50784 Jan 21 00:57:09.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.152897 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 21 00:57:09.157027 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 00:57:09.157193 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 00:57:09.159259 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 21 00:57:09.162999 systemd-journald[1272]: Time spent on flushing to /var/log/journal/2dd4da5146244a399a483b58a97a363b is 95.361ms for 1842 entries. 
Jan 21 00:57:09.162999 systemd-journald[1272]: System Journal (/var/log/journal/2dd4da5146244a399a483b58a97a363b) is 8M, max 588.1M, 580.1M free. Jan 21 00:57:09.268869 systemd-journald[1272]: Received client request to flush runtime journal. Jan 21 00:57:09.268914 kernel: loop2: detected capacity change from 0 to 1656 Jan 21 00:57:09.268934 kernel: loop3: detected capacity change from 0 to 229808 Jan 21 00:57:09.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.182531 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 21 00:57:09.185403 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 21 00:57:09.247035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 21 00:57:09.270830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 21 00:57:09.294833 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 21 00:57:09.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.296114 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 21 00:57:09.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.298000 audit: BPF prog-id=18 op=LOAD Jan 21 00:57:09.298000 audit: BPF prog-id=19 op=LOAD Jan 21 00:57:09.299000 audit: BPF prog-id=20 op=LOAD Jan 21 00:57:09.303000 audit: BPF prog-id=21 op=LOAD Jan 21 00:57:09.301332 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 21 00:57:09.308614 kernel: loop4: detected capacity change from 0 to 111560 Jan 21 00:57:09.307290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 21 00:57:09.309002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 21 00:57:09.317000 audit: BPF prog-id=22 op=LOAD Jan 21 00:57:09.318000 audit: BPF prog-id=23 op=LOAD Jan 21 00:57:09.318000 audit: BPF prog-id=24 op=LOAD Jan 21 00:57:09.320357 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 21 00:57:09.324000 audit: BPF prog-id=25 op=LOAD Jan 21 00:57:09.325000 audit: BPF prog-id=26 op=LOAD Jan 21 00:57:09.325000 audit: BPF prog-id=27 op=LOAD Jan 21 00:57:09.326470 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 21 00:57:09.365856 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. Jan 21 00:57:09.365871 systemd-tmpfiles[1346]: ACLs are not supported, ignoring. 
Jan 21 00:57:09.371170 kernel: loop5: detected capacity change from 0 to 50784 Jan 21 00:57:09.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.373208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 21 00:57:09.384537 systemd-nsresourced[1347]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 21 00:57:09.386062 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 21 00:57:09.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.391896 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 21 00:57:09.410221 kernel: loop6: detected capacity change from 0 to 1656 Jan 21 00:57:09.431421 kernel: loop7: detected capacity change from 0 to 229808 Jan 21 00:57:09.466598 systemd-oomd[1344]: No swap; memory pressure usage will be degraded Jan 21 00:57:09.467045 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 21 00:57:09.470313 kernel: loop1: detected capacity change from 0 to 111560 Jan 21 00:57:09.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.481881 systemd-resolved[1345]: Positive Trust Anchors: Jan 21 00:57:09.482310 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 21 00:57:09.482351 systemd-resolved[1345]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 21 00:57:09.482416 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 21 00:57:09.502634 systemd-resolved[1345]: Using system hostname 'ci-4547-0-0-n-1ed4874c6e'. Jan 21 00:57:09.504306 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 21 00:57:09.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.505374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 21 00:57:09.508319 (sd-merge)[1351]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. 
Jan 21 00:57:09.512060 (sd-merge)[1351]: Merged extensions into '/usr'. Jan 21 00:57:09.516393 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Jan 21 00:57:09.516489 systemd[1]: Reloading... Jan 21 00:57:09.577177 zram_generator::config[1392]: No configuration found. Jan 21 00:57:09.745011 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 21 00:57:09.745491 systemd[1]: Reloading finished in 228 ms. Jan 21 00:57:09.762357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 21 00:57:09.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.772301 systemd[1]: Starting ensure-sysext.service... Jan 21 00:57:09.774424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 21 00:57:09.775000 audit: BPF prog-id=28 op=LOAD Jan 21 00:57:09.775000 audit: BPF prog-id=22 op=UNLOAD Jan 21 00:57:09.777000 audit: BPF prog-id=29 op=LOAD Jan 21 00:57:09.777000 audit: BPF prog-id=30 op=LOAD Jan 21 00:57:09.777000 audit: BPF prog-id=23 op=UNLOAD Jan 21 00:57:09.777000 audit: BPF prog-id=24 op=UNLOAD Jan 21 00:57:09.777000 audit: BPF prog-id=31 op=LOAD Jan 21 00:57:09.777000 audit: BPF prog-id=15 op=UNLOAD Jan 21 00:57:09.778000 audit: BPF prog-id=32 op=LOAD Jan 21 00:57:09.778000 audit: BPF prog-id=33 op=LOAD Jan 21 00:57:09.778000 audit: BPF prog-id=16 op=UNLOAD Jan 21 00:57:09.778000 audit: BPF prog-id=17 op=UNLOAD Jan 21 00:57:09.778000 audit: BPF prog-id=34 op=LOAD Jan 21 00:57:09.778000 audit: BPF prog-id=21 op=UNLOAD Jan 21 00:57:09.779000 audit: BPF prog-id=35 op=LOAD Jan 21 00:57:09.779000 audit: BPF prog-id=25 op=UNLOAD Jan 21 00:57:09.779000 audit: BPF prog-id=36 op=LOAD Jan 21 00:57:09.779000 audit: BPF prog-id=37 op=LOAD Jan 21 00:57:09.779000 audit: BPF prog-id=26 op=UNLOAD Jan 21 00:57:09.779000 audit: BPF prog-id=27 op=UNLOAD Jan 21 00:57:09.780000 audit: BPF prog-id=38 op=LOAD Jan 21 00:57:09.781000 audit: BPF prog-id=18 op=UNLOAD Jan 21 00:57:09.781000 audit: BPF prog-id=39 op=LOAD Jan 21 00:57:09.781000 audit: BPF prog-id=40 op=LOAD Jan 21 00:57:09.781000 audit: BPF prog-id=19 op=UNLOAD Jan 21 00:57:09.781000 audit: BPF prog-id=20 op=UNLOAD Jan 21 00:57:09.789691 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 21 00:57:09.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:09.790000 audit: BPF prog-id=8 op=UNLOAD Jan 21 00:57:09.790000 audit: BPF prog-id=7 op=UNLOAD Jan 21 00:57:09.791000 audit: BPF prog-id=41 op=LOAD Jan 21 00:57:09.791000 audit: BPF prog-id=42 op=LOAD Jan 21 00:57:09.793339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 21 00:57:09.806396 systemd[1]: Reload requested from client PID 1438 ('systemctl') (unit ensure-sysext.service)... Jan 21 00:57:09.806409 systemd[1]: Reloading... Jan 21 00:57:09.807529 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 21 00:57:09.807557 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 21 00:57:09.807844 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 21 00:57:09.808842 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 21 00:57:09.808891 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 21 00:57:09.824257 systemd-tmpfiles[1439]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 00:57:09.824271 systemd-tmpfiles[1439]: Skipping /boot Jan 21 00:57:09.841345 systemd-tmpfiles[1439]: Detected autofs mount point /boot during canonicalization of boot. Jan 21 00:57:09.841357 systemd-tmpfiles[1439]: Skipping /boot Jan 21 00:57:09.843384 systemd-udevd[1442]: Using default interface naming scheme 'v257'. Jan 21 00:57:09.877180 zram_generator::config[1473]: No configuration found. Jan 21 00:57:10.022179 kernel: mousedev: PS/2 mouse device common for all mice Jan 21 00:57:10.034192 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Jan 21 00:57:10.066183 kernel: ACPI: button: Power Button [PWRF] Jan 21 00:57:10.098842 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 21 00:57:10.099100 systemd[1]: Reloading finished in 292 ms. Jan 21 00:57:10.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.108096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 21 00:57:10.111025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 21 00:57:10.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:10.120000 audit: BPF prog-id=43 op=LOAD Jan 21 00:57:10.120000 audit: BPF prog-id=35 op=UNLOAD Jan 21 00:57:10.121000 audit: BPF prog-id=44 op=LOAD Jan 21 00:57:10.121000 audit: BPF prog-id=45 op=LOAD Jan 21 00:57:10.121000 audit: BPF prog-id=36 op=UNLOAD Jan 21 00:57:10.121000 audit: BPF prog-id=37 op=UNLOAD Jan 21 00:57:10.121000 audit: BPF prog-id=46 op=LOAD Jan 21 00:57:10.121000 audit: BPF prog-id=47 op=LOAD Jan 21 00:57:10.121000 audit: BPF prog-id=41 op=UNLOAD Jan 21 00:57:10.121000 audit: BPF prog-id=42 op=UNLOAD Jan 21 00:57:10.123000 audit: BPF prog-id=48 op=LOAD Jan 21 00:57:10.123000 audit: BPF prog-id=31 op=UNLOAD Jan 21 00:57:10.123000 audit: BPF prog-id=49 op=LOAD Jan 21 00:57:10.123000 audit: BPF prog-id=50 op=LOAD Jan 21 00:57:10.123000 audit: BPF prog-id=32 op=UNLOAD Jan 21 00:57:10.123000 audit: BPF prog-id=33 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=51 op=LOAD Jan 21 00:57:10.124000 audit: BPF prog-id=38 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=52 op=LOAD Jan 21 00:57:10.124000 audit: BPF prog-id=53 op=LOAD Jan 21 00:57:10.124000 audit: BPF prog-id=39 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=40 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=54 op=LOAD Jan 21 00:57:10.124000 audit: BPF prog-id=34 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=55 op=LOAD Jan 21 00:57:10.124000 audit: BPF prog-id=28 op=UNLOAD Jan 21 00:57:10.124000 audit: BPF prog-id=56 op=LOAD Jan 21 00:57:10.127000 audit: BPF prog-id=57 op=LOAD Jan 21 00:57:10.127000 audit: BPF prog-id=29 op=UNLOAD Jan 21 00:57:10.127000 audit: BPF prog-id=30 op=UNLOAD Jan 21 00:57:10.160592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.163346 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 21 00:57:10.167255 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 21 00:57:10.167517 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 21 00:57:10.167659 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 21 00:57:10.168838 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 21 00:57:10.170354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:57:10.172650 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 21 00:57:10.175957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 21 00:57:10.184728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 21 00:57:10.190701 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 21 00:57:10.194345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 21 00:57:10.195362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:57:10.195609 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:57:10.198245 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 21 00:57:10.199213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 21 00:57:10.202028 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 21 00:57:10.203000 audit: BPF prog-id=58 op=LOAD Jan 21 00:57:10.213002 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 21 00:57:10.210204 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 21 00:57:10.214560 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 21 00:57:10.215569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.223837 kernel: Console: switching to colour dummy device 80x25 Jan 21 00:57:10.224083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 21 00:57:10.227768 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 21 00:57:10.228010 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 21 00:57:10.228031 kernel: [drm] features: -context_init Jan 21 00:57:10.230023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.230189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 21 00:57:10.230343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:57:10.230466 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:57:10.236052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 21 00:57:10.236125 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:57:10.236230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.240000 audit[1562]: SYSTEM_BOOT pid=1562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.248199 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 21 00:57:10.258490 kernel: [drm] number of scanouts: 1 Jan 21 00:57:10.256734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.256980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 21 00:57:10.260192 kernel: [drm] number of cap sets: 0 Jan 21 00:57:10.264217 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 21 00:57:10.266206 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 21 00:57:10.266273 kernel: Console: switching to colour frame buffer device 160x50 Jan 21 00:57:10.276190 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 21 00:57:10.277972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 21 00:57:10.280993 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 21 00:57:10.282211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 21 00:57:10.282393 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 21 00:57:10.282435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 21 00:57:10.282504 systemd[1]: Reached target time-set.target - System Time Set. Jan 21 00:57:10.282988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 21 00:57:10.285828 systemd[1]: Finished ensure-sysext.service. Jan 21 00:57:10.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.294858 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 21 00:57:10.295092 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 21 00:57:10.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.297490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 21 00:57:10.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.334019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 21 00:57:10.334340 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 21 00:57:10.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:10.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:10.348392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 21 00:57:10.348631 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 21 00:57:10.367000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 21 00:57:10.367000 audit[1598]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd45536d20 a2=420 a3=0 items=0 ppid=1545 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:10.367000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:57:10.368221 augenrules[1598]: No rules Jan 21 00:57:10.379363 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 21 00:57:10.382895 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 00:57:10.383143 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 00:57:10.383585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 21 00:57:10.383746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 21 00:57:10.391643 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 21 00:57:10.394105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 21 00:57:10.394363 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 21 00:57:10.398266 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 21 00:57:10.398538 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 21 00:57:10.403410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 21 00:57:10.404462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 21 00:57:10.418216 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 21 00:57:10.419847 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 21 00:57:10.420104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 21 00:57:10.435542 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 21 00:57:10.435617 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 21 00:57:10.451466 systemd-networkd[1556]: lo: Link UP Jan 21 00:57:10.451474 systemd-networkd[1556]: lo: Gained carrier Jan 21 00:57:10.452301 kernel: PTP clock support registered Jan 21 00:57:10.453979 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 21 00:57:10.454281 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 21 00:57:10.455756 systemd-networkd[1556]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:57:10.455764 systemd-networkd[1556]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 21 00:57:10.455969 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 21 00:57:10.457342 systemd[1]: Reached target network.target - Network. Jan 21 00:57:10.461400 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 21 00:57:10.464687 systemd-networkd[1556]: eth0: Link UP Jan 21 00:57:10.465660 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 21 00:57:10.469612 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 21 00:57:10.469957 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 21 00:57:10.475025 systemd-networkd[1556]: eth0: Gained carrier Jan 21 00:57:10.475131 systemd-networkd[1556]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 21 00:57:10.491431 systemd-networkd[1556]: eth0: DHCPv4 address 10.0.0.94/25, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 21 00:57:10.525528 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 21 00:57:10.543034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:10.575207 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:57:10.577479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:10.582422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:10.613588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 21 00:57:10.613821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:10.617461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 21 00:57:10.742199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 21 00:57:11.144478 ldconfig[1554]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 21 00:57:11.150608 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 21 00:57:11.153751 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 21 00:57:11.178575 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 21 00:57:11.180528 systemd[1]: Reached target sysinit.target - System Initialization. Jan 21 00:57:11.181748 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 21 00:57:11.182356 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 21 00:57:11.183220 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 21 00:57:11.184247 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 21 00:57:11.184762 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 21 00:57:11.185251 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 21 00:57:11.185767 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 21 00:57:11.186142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 21 00:57:11.186563 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 21 00:57:11.186600 systemd[1]: Reached target paths.target - Path Units. 
Jan 21 00:57:11.186979 systemd[1]: Reached target timers.target - Timer Units. Jan 21 00:57:11.190658 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 21 00:57:11.194011 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 21 00:57:11.197809 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 21 00:57:11.198832 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 21 00:57:11.200412 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 21 00:57:11.212063 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 21 00:57:11.213139 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 21 00:57:11.214508 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 21 00:57:11.218247 systemd[1]: Reached target sockets.target - Socket Units. Jan 21 00:57:11.218913 systemd[1]: Reached target basic.target - Basic System. Jan 21 00:57:11.219369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 21 00:57:11.219398 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 21 00:57:11.222737 systemd[1]: Starting chronyd.service - NTP client/server... Jan 21 00:57:11.227455 systemd[1]: Starting containerd.service - containerd container runtime... Jan 21 00:57:11.233128 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 21 00:57:11.234775 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 21 00:57:11.240337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 21 00:57:11.246314 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 21 00:57:11.251176 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:11.250414 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 21 00:57:11.254020 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 21 00:57:11.257441 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 21 00:57:11.266398 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 21 00:57:11.270352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 21 00:57:11.274388 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 21 00:57:11.280801 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 21 00:57:11.292802 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 21 00:57:11.294425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 21 00:57:11.294959 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 21 00:57:11.298087 systemd[1]: Starting update-engine.service - Update Engine... Jan 21 00:57:11.301252 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 21 00:57:11.305295 jq[1643]: false Jan 21 00:57:11.305732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 21 00:57:11.308122 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 21 00:57:11.308427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 21 00:57:11.312360 chronyd[1638]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 21 00:57:11.313136 chronyd[1638]: Loaded seccomp filter (level 2) Jan 21 00:57:11.314482 systemd[1]: Started chronyd.service - NTP client/server. Jan 21 00:57:11.316275 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Refreshing passwd entry cache Jan 21 00:57:11.316651 oslogin_cache_refresh[1647]: Refreshing passwd entry cache Jan 21 00:57:11.317613 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 21 00:57:11.319346 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 21 00:57:11.335333 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Failure getting users, quitting Jan 21 00:57:11.335333 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 00:57:11.335333 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Refreshing group entry cache Jan 21 00:57:11.334887 oslogin_cache_refresh[1647]: Failure getting users, quitting Jan 21 00:57:11.334904 oslogin_cache_refresh[1647]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 21 00:57:11.334945 oslogin_cache_refresh[1647]: Refreshing group entry cache Jan 21 00:57:11.343734 extend-filesystems[1644]: Found /dev/vda6 Jan 21 00:57:11.348535 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Failure getting groups, quitting Jan 21 00:57:11.348535 google_oslogin_nss_cache[1647]: oslogin_cache_refresh[1647]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 00:57:11.347994 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 21 00:57:11.345483 oslogin_cache_refresh[1647]: Failure getting groups, quitting Jan 21 00:57:11.345497 oslogin_cache_refresh[1647]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 21 00:57:11.350195 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 21 00:57:11.352484 jq[1654]: true Jan 21 00:57:11.361113 systemd[1]: motdgen.service: Deactivated successfully. Jan 21 00:57:11.361585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 21 00:57:11.370873 update_engine[1653]: I20260121 00:57:11.370508 1653 main.cc:92] Flatcar Update Engine starting Jan 21 00:57:11.372193 extend-filesystems[1644]: Found /dev/vda9 Jan 21 00:57:11.384199 extend-filesystems[1644]: Checking size of /dev/vda9 Jan 21 00:57:11.388476 tar[1660]: linux-amd64/LICENSE Jan 21 00:57:11.391178 tar[1660]: linux-amd64/helm Jan 21 00:57:11.391238 jq[1679]: true Jan 21 00:57:11.396850 dbus-daemon[1641]: [system] SELinux support is enabled Jan 21 00:57:11.397067 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 21 00:57:11.405976 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 21 00:57:11.406019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 21 00:57:11.406197 extend-filesystems[1644]: Resized partition /dev/vda9 Jan 21 00:57:11.408278 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 21 00:57:11.408294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 21 00:57:11.417627 extend-filesystems[1695]: resize2fs 1.47.3 (8-Jul-2025) Jan 21 00:57:11.419601 systemd[1]: Started update-engine.service - Update Engine. Jan 21 00:57:11.420298 update_engine[1653]: I20260121 00:57:11.419765 1653 update_check_scheduler.cc:74] Next update check in 5m48s Jan 21 00:57:11.433465 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 21 00:57:11.444176 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 21 00:57:11.493963 systemd-logind[1652]: New seat seat0. Jan 21 00:57:11.501485 systemd-logind[1652]: Watching system buttons on /dev/input/event3 (Power Button) Jan 21 00:57:11.501502 systemd-logind[1652]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 21 00:57:11.502213 systemd[1]: Started systemd-logind.service - User Login Management. Jan 21 00:57:11.635168 locksmithd[1696]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 21 00:57:11.699447 containerd[1672]: time="2026-01-21T00:57:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 21 00:57:11.705274 containerd[1672]: time="2026-01-21T00:57:11.705227168Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 21 00:57:11.723025 containerd[1672]: time="2026-01-21T00:57:11.720901707Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.802µs" Jan 21 00:57:11.723025 containerd[1672]: time="2026-01-21T00:57:11.720932868Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 21 00:57:11.723025 containerd[1672]: time="2026-01-21T00:57:11.720972202Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 21 00:57:11.723025 containerd[1672]: time="2026-01-21T00:57:11.720983082Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 21 00:57:11.727929 sshd_keygen[1687]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742657667Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742705697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742764451Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742774392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742975517Z" level=info msg="skip 
loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742989137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.742998641Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 21 00:57:11.743041 containerd[1672]: time="2026-01-21T00:57:11.743006681Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.745654 containerd[1672]: time="2026-01-21T00:57:11.745535425Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.745654 containerd[1672]: time="2026-01-21T00:57:11.745561860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 21 00:57:11.745654 containerd[1672]: time="2026-01-21T00:57:11.745653433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.745832 containerd[1672]: time="2026-01-21T00:57:11.745815931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.746183 containerd[1672]: time="2026-01-21T00:57:11.745843457Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 21 00:57:11.746183 containerd[1672]: time="2026-01-21T00:57:11.745852995Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 21 00:57:11.746183 containerd[1672]: time="2026-01-21T00:57:11.745874374Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 21 00:57:11.746183 containerd[1672]: time="2026-01-21T00:57:11.746060740Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 21 00:57:11.746183 containerd[1672]: time="2026-01-21T00:57:11.746107705Z" level=info msg="metadata content store policy set" policy=shared Jan 21 00:57:11.746387 bash[1710]: Updated "/home/core/.ssh/authorized_keys" Jan 21 00:57:11.748543 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 21 00:57:11.754461 systemd[1]: Starting sshkeys.service... Jan 21 00:57:11.756122 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 21 00:57:11.761422 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 21 00:57:11.781428 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 21 00:57:11.783987 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 21 00:57:11.785679 systemd[1]: issuegen.service: Deactivated successfully. Jan 21 00:57:11.785867 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 21 00:57:11.793317 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 21 00:57:11.798840 containerd[1672]: time="2026-01-21T00:57:11.798796463Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 21 00:57:11.798922 containerd[1672]: time="2026-01-21T00:57:11.798852743Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 00:57:11.798943 containerd[1672]: time="2026-01-21T00:57:11.798925236Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 21 00:57:11.798943 containerd[1672]: time="2026-01-21T00:57:11.798935800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 21 00:57:11.798982 containerd[1672]: time="2026-01-21T00:57:11.798946111Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 21 00:57:11.798982 containerd[1672]: time="2026-01-21T00:57:11.798955846Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 21 00:57:11.798982 containerd[1672]: time="2026-01-21T00:57:11.798967501Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 21 00:57:11.798982 containerd[1672]: time="2026-01-21T00:57:11.798979729Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 21 00:57:11.799048 containerd[1672]: time="2026-01-21T00:57:11.798999115Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 21 00:57:11.799048 containerd[1672]: time="2026-01-21T00:57:11.799011601Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 21 00:57:11.799048 containerd[1672]: time="2026-01-21T00:57:11.799022583Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 21 00:57:11.799048 containerd[1672]: time="2026-01-21T00:57:11.799031567Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 21 00:57:11.799048 containerd[1672]: time="2026-01-21T00:57:11.799039853Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 21 00:57:11.799127 containerd[1672]: time="2026-01-21T00:57:11.799050747Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 21 00:57:11.799391 containerd[1672]: time="2026-01-21T00:57:11.799173111Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 21 00:57:11.799391 containerd[1672]: time="2026-01-21T00:57:11.799192957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 21 00:57:11.799521 containerd[1672]: time="2026-01-21T00:57:11.799423089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 21 00:57:11.801168 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:11.802730 containerd[1672]: time="2026-01-21T00:57:11.802688216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 21 00:57:11.802791 containerd[1672]: 
time="2026-01-21T00:57:11.802740838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 21 00:57:11.802791 containerd[1672]: time="2026-01-21T00:57:11.802757658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 21 00:57:11.802791 containerd[1672]: time="2026-01-21T00:57:11.802773341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802792388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802808072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802822204Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802833663Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802864412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 21 00:57:11.802933 containerd[1672]: time="2026-01-21T00:57:11.802921380Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 21 00:57:11.803043 containerd[1672]: time="2026-01-21T00:57:11.802938573Z" level=info msg="Start snapshots syncer" Jan 21 00:57:11.803043 containerd[1672]: time="2026-01-21T00:57:11.802962628Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 21 00:57:11.804711 containerd[1672]: time="2026-01-21T00:57:11.803867396Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 21 00:57:11.804711 containerd[1672]: time="2026-01-21T00:57:11.803927725Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.803984219Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804099840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804124019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804137523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804182405Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804199935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804211309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804224775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804236934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 21 
00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804251282Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804280612Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804297350Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 21 00:57:11.804851 containerd[1672]: time="2026-01-21T00:57:11.804308148Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804318081Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804329182Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804341958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804355250Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804370657Z" level=info msg="runtime interface created" Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804375599Z" level=info msg="created NRI interface" Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804387030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804398767Z" level=info msg="Connect containerd service" Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.804422507Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 21 00:57:11.805060 containerd[1672]: time="2026-01-21T00:57:11.805031879Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 00:57:11.819759 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 21 00:57:11.825411 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 21 00:57:11.833393 systemd-networkd[1556]: eth0: Gained IPv6LL Jan 21 00:57:11.833842 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 21 00:57:11.836806 systemd[1]: Reached target getty.target - Login Prompts. Jan 21 00:57:11.843537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 21 00:57:11.845677 systemd[1]: Reached target network-online.target - Network is Online. Jan 21 00:57:11.848595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:11.852108 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 21 00:57:11.909297 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
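(Note: the cri plugin error above, "no network config found in /etc/cni/net.d", is expected at this stage: no CNI configuration has been installed yet, so pod networking stays uninitialized until the conf syncer finds one. A hedged sketch of dropping a minimal bridge/host-local conflist into that directory; the file name, bridge name and subnet below are illustrative assumptions, not values from this host.)

```python
#!/usr/bin/env python3
"""Sketch: install a minimal CNI conflist so containerd's cni conf syncer
(watching /etc/cni/net.d) can initialize pod networking.
All values below are illustrative, not taken from this host."""
import json
import pathlib

CNI_DIR = pathlib.Path("/etc/cni/net.d")
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    CNI_DIR.mkdir(parents=True, exist_ok=True)
    out = CNI_DIR / "10-example-bridge.conflist"
    out.write_text(json.dumps(conflist, indent=2) + "\n")
    print(f"wrote {out}; the conf syncer picks it up on its next scan")
```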
Jan 21 00:57:11.938211 containerd[1672]: time="2026-01-21T00:57:11.938168752Z" level=info msg="Start subscribing containerd event" Jan 21 00:57:11.938312 containerd[1672]: time="2026-01-21T00:57:11.938222803Z" level=info msg="Start recovering state" Jan 21 00:57:11.938333 containerd[1672]: time="2026-01-21T00:57:11.938320230Z" level=info msg="Start event monitor" Jan 21 00:57:11.938351 containerd[1672]: time="2026-01-21T00:57:11.938331873Z" level=info msg="Start cni network conf syncer for default" Jan 21 00:57:11.938351 containerd[1672]: time="2026-01-21T00:57:11.938338989Z" level=info msg="Start streaming server" Jan 21 00:57:11.938351 containerd[1672]: time="2026-01-21T00:57:11.938346589Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 21 00:57:11.938399 containerd[1672]: time="2026-01-21T00:57:11.938354039Z" level=info msg="runtime interface starting up..." Jan 21 00:57:11.938399 containerd[1672]: time="2026-01-21T00:57:11.938359408Z" level=info msg="starting plugins..." Jan 21 00:57:11.938399 containerd[1672]: time="2026-01-21T00:57:11.938371906Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 21 00:57:11.938885 containerd[1672]: time="2026-01-21T00:57:11.938866858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 21 00:57:11.939296 containerd[1672]: time="2026-01-21T00:57:11.939232979Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 21 00:57:11.939889 containerd[1672]: time="2026-01-21T00:57:11.939482164Z" level=info msg="containerd successfully booted in 0.241674s" Jan 21 00:57:11.940362 systemd[1]: Started containerd.service - containerd container runtime. Jan 21 00:57:12.105185 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 21 00:57:12.133267 extend-filesystems[1695]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 21 00:57:12.133267 extend-filesystems[1695]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 21 00:57:12.133267 extend-filesystems[1695]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 21 00:57:12.141870 extend-filesystems[1644]: Resized filesystem in /dev/vda9 Jan 21 00:57:12.135251 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 21 00:57:12.135660 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 21 00:57:12.200194 tar[1660]: linux-amd64/README.md Jan 21 00:57:12.213591 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 21 00:57:12.266226 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:12.826191 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:13.145810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:13.155666 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:57:13.906176 kubelet[1781]: E0121 00:57:13.906064 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:57:13.908429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:57:13.908556 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
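(Note: kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written by kubeadm init/join, so the failure, and the scheduled restart seen further down, is just the unit spinning until bootstrap happens. A sketch of the same pre-flight check plus the kind of skeleton such a file typically starts with; the field values are assumptions, not this node's eventual configuration.)

```python
#!/usr/bin/env python3
"""Sketch: reproduce kubelet's 'config file missing' condition and show the
kind of skeleton kubeadm would later write. Values are illustrative."""
import pathlib

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

# Minimal KubeletConfiguration skeleton (normally generated by kubeadm).
SKELETON = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in the CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

if __name__ == "__main__":
    if not CONFIG.exists():
        # Same condition that makes kubelet.service exit with status 1 above.
        print(f"{CONFIG} is missing; kubelet would fail to start")
        print("a kubeadm-generated file usually looks roughly like:\n")
        print(SKELETON)
    else:
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")
```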
Jan 21 00:57:13.908928 systemd[1]: kubelet.service: Consumed 1.019s CPU time, 266.4M memory peak. Jan 21 00:57:14.278179 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:14.840198 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:18.288250 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:18.296394 coreos-metadata[1640]: Jan 21 00:57:18.296 WARN failed to locate config-drive, using the metadata service API instead Jan 21 00:57:18.315541 coreos-metadata[1640]: Jan 21 00:57:18.315 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 21 00:57:18.773806 coreos-metadata[1640]: Jan 21 00:57:18.773 INFO Fetch successful Jan 21 00:57:18.774040 coreos-metadata[1640]: Jan 21 00:57:18.774 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 21 00:57:18.849192 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 21 00:57:18.856177 coreos-metadata[1737]: Jan 21 00:57:18.855 WARN failed to locate config-drive, using the metadata service API instead Jan 21 00:57:18.867906 coreos-metadata[1737]: Jan 21 00:57:18.867 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 21 00:57:19.040553 coreos-metadata[1640]: Jan 21 00:57:19.040 INFO Fetch successful Jan 21 00:57:19.040553 coreos-metadata[1640]: Jan 21 00:57:19.040 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 21 00:57:19.151971 coreos-metadata[1737]: Jan 21 00:57:19.151 INFO Fetch successful Jan 21 00:57:19.151971 coreos-metadata[1737]: Jan 21 00:57:19.151 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 21 00:57:19.305119 coreos-metadata[1640]: Jan 21 00:57:19.304 INFO Fetch successful Jan 21 00:57:19.305119 coreos-metadata[1640]: Jan 21 00:57:19.305 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 21 00:57:19.418713 coreos-metadata[1737]: Jan 21 00:57:19.418 INFO Fetch successful Jan 21 00:57:19.421166 unknown[1737]: wrote ssh authorized keys file for user: core Jan 21 00:57:19.458317 coreos-metadata[1640]: Jan 21 00:57:19.458 INFO Fetch successful Jan 21 00:57:19.458317 coreos-metadata[1640]: Jan 21 00:57:19.458 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 21 00:57:19.460996 update-ssh-keys[1798]: Updated "/home/core/.ssh/authorized_keys" Jan 21 00:57:19.461415 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 21 00:57:19.463839 systemd[1]: Finished sshkeys.service. Jan 21 00:57:19.581333 coreos-metadata[1640]: Jan 21 00:57:19.581 INFO Fetch successful Jan 21 00:57:19.581333 coreos-metadata[1640]: Jan 21 00:57:19.581 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 21 00:57:19.706410 coreos-metadata[1640]: Jan 21 00:57:19.706 INFO Fetch successful Jan 21 00:57:19.733067 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 21 00:57:19.733529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 21 00:57:19.733660 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 21 00:57:19.736246 systemd[1]: Startup finished in 3.493s (kernel) + 12.346s (initrd) + 11.743s (userspace) = 27.583s. Jan 21 00:57:21.283144 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
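(Note: the coreos-metadata fetches above fall back from the missing config-drive to the link-local metadata service, walking the OpenStack and EC2-compatible endpoints one attribute at a time. A small sketch of the same requests; it only works from inside an instance that exposes 169.254.169.254, and the endpoint list is copied from the log rather than exhaustive.)

```python
#!/usr/bin/env python3
"""Sketch: query the same metadata endpoints the log shows coreos-metadata
fetching. Only reachable from inside an OpenStack/EC2-compatible instance."""
import urllib.request

BASE = "http://169.254.169.254"
ENDPOINTS = [
    "/openstack/2012-08-10/meta_data.json",
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
    "/latest/meta-data/public-keys/0/openssh-key",
]

if __name__ == "__main__":
    for path in ENDPOINTS:
        try:
            with urllib.request.urlopen(BASE + path, timeout=2) as resp:
                body = resp.read().decode(errors="replace").strip()
            print(f"{path}: {body[:60]}")
        except OSError as exc:
            print(f"{path}: unreachable ({exc})")
```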
Jan 21 00:57:21.284279 systemd[1]: Started sshd@0-10.0.0.94:22-4.153.228.146:56614.service - OpenSSH per-connection server daemon (4.153.228.146:56614). Jan 21 00:57:21.859194 sshd[1807]: Accepted publickey for core from 4.153.228.146 port 56614 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:21.864492 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:21.871901 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 21 00:57:21.873298 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 21 00:57:21.879132 systemd-logind[1652]: New session 1 of user core. Jan 21 00:57:21.899831 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 21 00:57:21.904026 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 21 00:57:21.924976 (systemd)[1813]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:21.930635 systemd-logind[1652]: New session 2 of user core. Jan 21 00:57:22.058399 systemd[1813]: Queued start job for default target default.target. Jan 21 00:57:22.080419 systemd[1813]: Created slice app.slice - User Application Slice. Jan 21 00:57:22.080541 systemd[1813]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 21 00:57:22.080557 systemd[1813]: Reached target paths.target - Paths. Jan 21 00:57:22.080618 systemd[1813]: Reached target timers.target - Timers. Jan 21 00:57:22.082955 systemd[1813]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 21 00:57:22.083992 systemd[1813]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 21 00:57:22.101342 systemd[1813]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 21 00:57:22.101425 systemd[1813]: Reached target sockets.target - Sockets. Jan 21 00:57:22.103467 systemd[1813]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 21 00:57:22.103554 systemd[1813]: Reached target basic.target - Basic System. Jan 21 00:57:22.103608 systemd[1813]: Reached target default.target - Main User Target. Jan 21 00:57:22.103634 systemd[1813]: Startup finished in 164ms. Jan 21 00:57:22.103871 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 21 00:57:22.112957 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 21 00:57:22.428366 systemd[1]: Started sshd@1-10.0.0.94:22-4.153.228.146:56628.service - OpenSSH per-connection server daemon (4.153.228.146:56628). Jan 21 00:57:22.974242 sshd[1827]: Accepted publickey for core from 4.153.228.146 port 56628 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:22.975481 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:22.980179 systemd-logind[1652]: New session 3 of user core. Jan 21 00:57:22.991605 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 21 00:57:23.265436 sshd[1831]: Connection closed by 4.153.228.146 port 56628 Jan 21 00:57:23.266039 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 21 00:57:23.270107 systemd[1]: sshd@1-10.0.0.94:22-4.153.228.146:56628.service: Deactivated successfully. Jan 21 00:57:23.271686 systemd[1]: session-3.scope: Deactivated successfully. Jan 21 00:57:23.272877 systemd-logind[1652]: Session 3 logged out. Waiting for processes to exit. 
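(Note: each accepted login above prints the key's SHA256 fingerprint, e.g. "SHA256:31xS...". That string is just the unpadded base64 of a SHA-256 digest over the raw key blob from authorized_keys; a sketch with a placeholder key follows, where the key material is hypothetical rather than the key in this log.)

```python
#!/usr/bin/env python3
"""Sketch: compute an OpenSSH-style SHA256 fingerprint from an
authorized_keys line. The key below is a placeholder, not the log's key."""
import base64
import hashlib

def sha256_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    blob_b64 += "=" * (-len(blob_b64) % 4)  # restore padding if it was stripped
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest as unpadded base64.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlaceholderKeyMaterialOnly0 core@example"
    print(sha256_fingerprint(example))
```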
Jan 21 00:57:23.274131 systemd-logind[1652]: Removed session 3. Jan 21 00:57:23.378664 systemd[1]: Started sshd@2-10.0.0.94:22-4.153.228.146:56636.service - OpenSSH per-connection server daemon (4.153.228.146:56636). Jan 21 00:57:23.922318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 21 00:57:23.924318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:23.925551 sshd[1837]: Accepted publickey for core from 4.153.228.146 port 56636 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:23.927480 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:23.932167 systemd-logind[1652]: New session 4 of user core. Jan 21 00:57:23.933027 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 21 00:57:24.044597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:24.054606 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:57:24.225556 sshd[1844]: Connection closed by 4.153.228.146 port 56636 Jan 21 00:57:24.227436 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Jan 21 00:57:24.232763 systemd-logind[1652]: Session 4 logged out. Waiting for processes to exit. Jan 21 00:57:24.233187 systemd[1]: sshd@2-10.0.0.94:22-4.153.228.146:56636.service: Deactivated successfully. Jan 21 00:57:24.235428 systemd[1]: session-4.scope: Deactivated successfully. Jan 21 00:57:24.237748 systemd-logind[1652]: Removed session 4. Jan 21 00:57:24.342647 systemd[1]: Started sshd@3-10.0.0.94:22-4.153.228.146:56638.service - OpenSSH per-connection server daemon (4.153.228.146:56638). Jan 21 00:57:24.557036 kubelet[1850]: E0121 00:57:24.529005 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:57:24.532314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:57:24.532438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:57:24.532937 systemd[1]: kubelet.service: Consumed 167ms CPU time, 110.9M memory peak. Jan 21 00:57:24.861273 sshd[1860]: Accepted publickey for core from 4.153.228.146 port 56638 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:24.862611 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:24.867375 systemd-logind[1652]: New session 5 of user core. Jan 21 00:57:24.875697 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 21 00:57:25.149598 sshd[1867]: Connection closed by 4.153.228.146 port 56638 Jan 21 00:57:25.150301 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Jan 21 00:57:25.155098 systemd[1]: sshd@3-10.0.0.94:22-4.153.228.146:56638.service: Deactivated successfully. Jan 21 00:57:25.157237 systemd[1]: session-5.scope: Deactivated successfully. Jan 21 00:57:25.158861 systemd-logind[1652]: Session 5 logged out. Waiting for processes to exit. Jan 21 00:57:25.159863 systemd-logind[1652]: Removed session 5. 
Jan 21 00:57:25.271813 systemd[1]: Started sshd@4-10.0.0.94:22-4.153.228.146:37180.service - OpenSSH per-connection server daemon (4.153.228.146:37180). Jan 21 00:57:25.803138 sshd[1873]: Accepted publickey for core from 4.153.228.146 port 37180 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:25.804277 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:25.808353 systemd-logind[1652]: New session 6 of user core. Jan 21 00:57:25.815380 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 21 00:57:26.023540 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 21 00:57:26.023799 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:57:26.037106 sudo[1878]: pam_unix(sudo:session): session closed for user root Jan 21 00:57:26.136731 sshd[1877]: Connection closed by 4.153.228.146 port 37180 Jan 21 00:57:26.137637 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Jan 21 00:57:26.142170 systemd[1]: sshd@4-10.0.0.94:22-4.153.228.146:37180.service: Deactivated successfully. Jan 21 00:57:26.144111 systemd[1]: session-6.scope: Deactivated successfully. Jan 21 00:57:26.145502 systemd-logind[1652]: Session 6 logged out. Waiting for processes to exit. Jan 21 00:57:26.146600 systemd-logind[1652]: Removed session 6. Jan 21 00:57:26.247424 systemd[1]: Started sshd@5-10.0.0.94:22-4.153.228.146:37182.service - OpenSSH per-connection server daemon (4.153.228.146:37182). Jan 21 00:57:26.786815 sshd[1885]: Accepted publickey for core from 4.153.228.146 port 37182 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:26.788381 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:26.792390 systemd-logind[1652]: New session 7 of user core. Jan 21 00:57:26.802742 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 21 00:57:26.992853 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 21 00:57:26.993108 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:57:26.996831 sudo[1891]: pam_unix(sudo:session): session closed for user root Jan 21 00:57:27.002841 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 21 00:57:27.003101 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:57:27.010522 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
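(Note: the sudo commands below remove the two rule fragments under /etc/audit/rules.d and restart audit-rules, which is why augenrules then reports "No rules". A rough sketch of that path, merging whatever fragments remain and handing the result to auditctl -R as the service does; this is an approximation of the augenrules behaviour, not its full logic, and it needs root plus the audit tooling to actually run.)

```python
#!/usr/bin/env python3
"""Sketch: approximate what the audit-rules path does after the 'rm' below --
merge /etc/audit/rules.d/*.rules and load the result with auditctl -R.
With 80-selinux.rules and 99-default.rules gone, the merge is empty."""
import pathlib
import subprocess

RULES_D = pathlib.Path("/etc/audit/rules.d")
MERGED = pathlib.Path("/etc/audit/audit.rules")

if __name__ == "__main__":
    fragments = sorted(RULES_D.glob("*.rules"))
    print("fragments left:", [f.name for f in fragments] or "none")
    text = "\n".join(f.read_text() for f in fragments)
    MERGED.write_text(text + "\n" if text else "\n")
    # Same load step the service performs (requires root and auditd tooling).
    subprocess.run(["auditctl", "-R", str(MERGED)], check=False)
```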
Jan 21 00:57:27.047000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 00:57:27.048054 augenrules[1915]: No rules Jan 21 00:57:27.048422 kernel: kauditd_printk_skb: 179 callbacks suppressed Jan 21 00:57:27.048469 kernel: audit: type=1305 audit(1768957047.047:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 21 00:57:27.047000 audit[1915]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdceb12dc0 a2=420 a3=0 items=0 ppid=1896 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:27.051257 systemd[1]: audit-rules.service: Deactivated successfully. Jan 21 00:57:27.051550 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 21 00:57:27.053667 kernel: audit: type=1300 audit(1768957047.047:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdceb12dc0 a2=420 a3=0 items=0 ppid=1896 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:27.054589 sudo[1890]: pam_unix(sudo:session): session closed for user root Jan 21 00:57:27.047000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:57:27.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.058374 kernel: audit: type=1327 audit(1768957047.047:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 21 00:57:27.058412 kernel: audit: type=1130 audit(1768957047.049:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.060382 kernel: audit: type=1131 audit(1768957047.049:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.060427 kernel: audit: type=1106 audit(1768957047.054:228): pid=1890 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.054000 audit[1890]: USER_END pid=1890 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.054000 audit[1890]: CRED_DISP pid=1890 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:27.064613 kernel: audit: type=1104 audit(1768957047.054:229): pid=1890 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.155099 sshd[1889]: Connection closed by 4.153.228.146 port 37182 Jan 21 00:57:27.155638 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Jan 21 00:57:27.157000 audit[1885]: USER_END pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.157000 audit[1885]: CRED_DISP pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.162819 systemd[1]: sshd@5-10.0.0.94:22-4.153.228.146:37182.service: Deactivated successfully. Jan 21 00:57:27.164843 kernel: audit: type=1106 audit(1768957047.157:230): pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.164898 kernel: audit: type=1104 audit(1768957047.157:231): pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.165370 systemd[1]: session-7.scope: Deactivated successfully. Jan 21 00:57:27.165394 systemd-logind[1652]: Session 7 logged out. Waiting for processes to exit. Jan 21 00:57:27.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.94:22-4.153.228.146:37182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.168215 kernel: audit: type=1131 audit(1768957047.162:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.94:22-4.153.228.146:37182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.168500 systemd-logind[1652]: Removed session 7. Jan 21 00:57:27.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-4.153.228.146:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:27.268888 systemd[1]: Started sshd@6-10.0.0.94:22-4.153.228.146:37192.service - OpenSSH per-connection server daemon (4.153.228.146:37192). 
Jan 21 00:57:27.810000 audit[1924]: USER_ACCT pid=1924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.811227 sshd[1924]: Accepted publickey for core from 4.153.228.146 port 37192 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 00:57:27.812000 audit[1924]: CRED_ACQ pid=1924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.812000 audit[1924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdcb6bd10 a2=3 a3=0 items=0 ppid=1 pid=1924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:27.812000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:57:27.812780 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:57:27.817104 systemd-logind[1652]: New session 8 of user core. Jan 21 00:57:27.829635 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 21 00:57:27.832000 audit[1924]: USER_START pid=1924 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:27.834000 audit[1928]: CRED_ACQ pid=1928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:57:28.016975 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 21 00:57:28.017248 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 21 00:57:28.016000 audit[1929]: USER_ACCT pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:28.016000 audit[1929]: CRED_REFR pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:28.017000 audit[1929]: USER_START pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:57:28.478714 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 21 00:57:28.507644 (dockerd)[1948]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 21 00:57:28.842794 dockerd[1948]: time="2026-01-21T00:57:28.842306658Z" level=info msg="Starting up" Jan 21 00:57:28.845557 dockerd[1948]: time="2026-01-21T00:57:28.845523589Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 21 00:57:28.857968 dockerd[1948]: time="2026-01-21T00:57:28.857851941Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 21 00:57:28.889496 systemd[1]: var-lib-docker-metacopy\x2dcheck1534340606-merged.mount: Deactivated successfully. Jan 21 00:57:28.918084 dockerd[1948]: time="2026-01-21T00:57:28.918021632Z" level=info msg="Loading containers: start." Jan 21 00:57:28.932232 kernel: Initializing XFRM netlink socket Jan 21 00:57:28.994000 audit[1997]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:28.994000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc4e3955c0 a2=0 a3=0 items=0 ppid=1948 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:28.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 00:57:28.996000 audit[1999]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:28.996000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcdfaf22c0 a2=0 a3=0 items=0 ppid=1948 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:28.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 00:57:28.998000 audit[2001]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:28.998000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceececce0 a2=0 a3=0 items=0 ppid=1948 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:28.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 00:57:29.000000 audit[2003]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.000000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc237fdb10 a2=0 a3=0 items=0 ppid=1948 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.000000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 00:57:29.002000 audit[2005]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.002000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1fa487e0 a2=0 a3=0 items=0 ppid=1948 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.002000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 00:57:29.004000 audit[2007]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.004000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe82217f0 a2=0 a3=0 items=0 ppid=1948 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:57:29.006000 audit[2009]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.006000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb2e434d0 a2=0 a3=0 items=0 ppid=1948 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:57:29.008000 audit[2011]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.008000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd14b459f0 a2=0 a3=0 items=0 ppid=1948 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 00:57:29.045000 audit[2014]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.045000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe669c2b80 a2=0 a3=0 items=0 ppid=1948 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.045000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 21 00:57:29.047000 audit[2016]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.047000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffb7496470 a2=0 a3=0 items=0 ppid=1948 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 00:57:29.052000 audit[2018]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.052000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd616fa260 a2=0 a3=0 items=0 ppid=1948 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 00:57:29.054000 audit[2020]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.054000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffd7e0b460 a2=0 a3=0 items=0 ppid=1948 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:57:29.056000 audit[2022]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.056000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe1119e220 a2=0 a3=0 items=0 ppid=1948 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.056000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 00:57:29.095000 audit[2052]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.095000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc0c2de280 a2=0 a3=0 items=0 ppid=1948 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.095000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 21 00:57:29.098000 audit[2054]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.098000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd874a7e20 a2=0 a3=0 items=0 ppid=1948 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 21 00:57:29.100000 audit[2056]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.100000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef5f99070 a2=0 a3=0 items=0 ppid=1948 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 21 00:57:29.102000 audit[2058]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.102000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6d4f4c60 a2=0 a3=0 items=0 ppid=1948 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 21 00:57:29.104000 audit[2060]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.104000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3d8d5cd0 a2=0 a3=0 items=0 ppid=1948 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 21 00:57:29.106000 audit[2062]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.106000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdef607e10 a2=0 a3=0 items=0 ppid=1948 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:57:29.107000 audit[2064]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2064 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.107000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff86670c80 a2=0 a3=0 items=0 ppid=1948 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:57:29.109000 audit[2066]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.109000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff56f3cc30 a2=0 a3=0 items=0 ppid=1948 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 21 00:57:29.111000 audit[2068]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.111000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd7af529d0 a2=0 a3=0 items=0 ppid=1948 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 21 00:57:29.113000 audit[2070]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.113000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4063df70 a2=0 a3=0 items=0 ppid=1948 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 21 00:57:29.115000 audit[2072]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.115000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffface86d70 a2=0 a3=0 items=0 ppid=1948 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.115000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 21 00:57:29.117000 audit[2074]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 21 00:57:29.117000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffefa2e3750 a2=0 a3=0 items=0 ppid=1948 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.117000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 21 00:57:29.119000 audit[2076]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.119000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffd57dff20 a2=0 a3=0 items=0 ppid=1948 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 21 00:57:29.124000 audit[2081]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.124000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3caed7e0 a2=0 a3=0 items=0 ppid=1948 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 00:57:29.126000 audit[2083]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.126000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffaa005150 a2=0 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 00:57:29.127000 audit[2085]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.127000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe658a49d0 a2=0 a3=0 items=0 ppid=1948 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.127000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 00:57:29.129000 audit[2087]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.129000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda9d03950 a2=0 a3=0 items=0 ppid=1948 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 21 00:57:29.131000 audit[2089]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.131000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd9f412e0 a2=0 a3=0 items=0 ppid=1948 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.131000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 21 00:57:29.133000 audit[2091]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:29.133000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe828db0e0 a2=0 a3=0 items=0 ppid=1948 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 21 00:57:29.162000 audit[2096]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.162000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffc18b1b00 a2=0 a3=0 items=0 ppid=1948 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 21 00:57:29.165000 audit[2098]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.165000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe4ca5e730 a2=0 a3=0 items=0 ppid=1948 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 21 00:57:29.172000 audit[2106]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.172000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd0ca09bc0 a2=0 a3=0 items=0 ppid=1948 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
21 00:57:29.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 21 00:57:29.187000 audit[2112]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.187000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcadad8c60 a2=0 a3=0 items=0 ppid=1948 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 21 00:57:29.190000 audit[2114]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.190000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcf646e420 a2=0 a3=0 items=0 ppid=1948 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 21 00:57:29.192000 audit[2116]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.192000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc39ec8d30 a2=0 a3=0 items=0 ppid=1948 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.192000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 21 00:57:29.194000 audit[2118]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.194000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec79179c0 a2=0 a3=0 items=0 ppid=1948 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 21 00:57:29.196000 audit[2120]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:29.196000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce5923ff0 a2=0 a3=0 items=0 ppid=1948 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:29.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 21 00:57:29.196675 systemd-networkd[1556]: docker0: Link UP Jan 21 00:57:29.203998 dockerd[1948]: time="2026-01-21T00:57:29.203943584Z" level=info msg="Loading containers: done." Jan 21 00:57:29.216073 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2171596075-merged.mount: Deactivated successfully. Jan 21 00:57:29.229888 dockerd[1948]: time="2026-01-21T00:57:29.229384474Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 21 00:57:29.229888 dockerd[1948]: time="2026-01-21T00:57:29.229534092Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 21 00:57:29.229888 dockerd[1948]: time="2026-01-21T00:57:29.229645022Z" level=info msg="Initializing buildkit" Jan 21 00:57:29.265970 dockerd[1948]: time="2026-01-21T00:57:29.262898977Z" level=info msg="Completed buildkit initialization" Jan 21 00:57:29.273136 dockerd[1948]: time="2026-01-21T00:57:29.273086388Z" level=info msg="Daemon has completed initialization" Jan 21 00:57:29.273398 dockerd[1948]: time="2026-01-21T00:57:29.273238271Z" level=info msg="API listen on /run/docker.sock" Jan 21 00:57:29.273652 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 21 00:57:29.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:30.649591 containerd[1672]: time="2026-01-21T00:57:30.649535619Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 21 00:57:31.408773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19948401.mount: Deactivated successfully. 
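The NETFILTER_CFG/SYSCALL/PROCTITLE triples above are Docker (ppid 1948) building its chains through xtables-nft-multi; the proctitle field is the invoked command line, hex-encoded with NUL-separated arguments. A minimal, illustrative Python helper (not part of the log) that recovers the command from the value in the audit[2081] record above:

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL bytes between arguments.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode(errors="replace") for part in raw.split(b"\x00"))

    # Value copied verbatim from the audit[2081] PROCTITLE record above.
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552"
    ))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER

The same helper applies to any of the PROCTITLE values in this section; they all decode to iptables/ip6tables invocations of this shape.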
Jan 21 00:57:32.250204 containerd[1672]: time="2026-01-21T00:57:32.250140319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:32.251566 containerd[1672]: time="2026-01-21T00:57:32.251329438Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 21 00:57:32.253283 containerd[1672]: time="2026-01-21T00:57:32.253255839Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:32.256118 containerd[1672]: time="2026-01-21T00:57:32.256094834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:32.256777 containerd[1672]: time="2026-01-21T00:57:32.256753620Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.607034636s" Jan 21 00:57:32.256820 containerd[1672]: time="2026-01-21T00:57:32.256782757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 21 00:57:32.257494 containerd[1672]: time="2026-01-21T00:57:32.257447739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 21 00:57:33.696766 containerd[1672]: time="2026-01-21T00:57:33.696698722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:33.699103 containerd[1672]: time="2026-01-21T00:57:33.699053140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 21 00:57:33.700479 containerd[1672]: time="2026-01-21T00:57:33.700451301Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:33.705496 containerd[1672]: time="2026-01-21T00:57:33.704063898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:33.705626 containerd[1672]: time="2026-01-21T00:57:33.705582208Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.448113014s" Jan 21 00:57:33.705626 containerd[1672]: time="2026-01-21T00:57:33.705606964Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 21 00:57:33.706171 
containerd[1672]: time="2026-01-21T00:57:33.706139420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 21 00:57:34.672514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 21 00:57:34.674314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:34.800387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:34.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:34.801571 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 21 00:57:34.801619 kernel: audit: type=1130 audit(1768957054.800:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:34.813535 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:57:34.851734 kubelet[2234]: E0121 00:57:34.851621 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:57:34.855588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:57:34.855716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:57:34.856556 systemd[1]: kubelet.service: Consumed 133ms CPU time, 109.4M memory peak. Jan 21 00:57:34.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:57:34.860187 kernel: audit: type=1131 audit(1768957054.856:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 21 00:57:35.102646 chronyd[1638]: Selected source PHC0 Jan 21 00:57:35.438667 containerd[1672]: time="2026-01-21T00:57:35.438617786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:35.441686 containerd[1672]: time="2026-01-21T00:57:35.441629752Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 21 00:57:35.443557 containerd[1672]: time="2026-01-21T00:57:35.443512573Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:35.450249 containerd[1672]: time="2026-01-21T00:57:35.450184560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:35.451328 containerd[1672]: time="2026-01-21T00:57:35.451283352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.745108352s" Jan 21 00:57:35.451384 containerd[1672]: time="2026-01-21T00:57:35.451328937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 21 00:57:35.452142 containerd[1672]: time="2026-01-21T00:57:35.451830725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 21 00:57:37.877224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598197099.mount: Deactivated successfully. 
Jan 21 00:57:38.296672 containerd[1672]: time="2026-01-21T00:57:38.296591943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:38.299021 containerd[1672]: time="2026-01-21T00:57:38.298861346Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20341948" Jan 21 00:57:38.301274 containerd[1672]: time="2026-01-21T00:57:38.301245283Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:38.304318 containerd[1672]: time="2026-01-21T00:57:38.304292626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:38.304799 containerd[1672]: time="2026-01-21T00:57:38.304773670Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.852908568s" Jan 21 00:57:38.304866 containerd[1672]: time="2026-01-21T00:57:38.304854603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 21 00:57:38.305508 containerd[1672]: time="2026-01-21T00:57:38.305488195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 21 00:57:39.073721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303151656.mount: Deactivated successfully. 
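Each pull above reports containerd's own "bytes read" counter and an elapsed time, so a rough transfer rate can be read straight off the records. A back-of-envelope, illustrative check using the kube-proxy figures from the records above (the counter covers only what this pull actually fetched, so treat the result as approximate):

    # Rough pull rate for kube-proxy, from the figures reported above.
    bytes_read = 20_341_948        # "stop pulling image ... bytes read=20341948"
    elapsed_s = 2.852908568        # "... in 2.852908568s"
    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")   # ~7.1 MB/s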
Jan 21 00:57:39.735880 containerd[1672]: time="2026-01-21T00:57:39.735831207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:39.738383 containerd[1672]: time="2026-01-21T00:57:39.738356255Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=84716" Jan 21 00:57:39.740412 containerd[1672]: time="2026-01-21T00:57:39.740382361Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:39.743892 containerd[1672]: time="2026-01-21T00:57:39.743860114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:39.744820 containerd[1672]: time="2026-01-21T00:57:39.744796935Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.439284836s" Jan 21 00:57:39.744896 containerd[1672]: time="2026-01-21T00:57:39.744885510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 21 00:57:39.745554 containerd[1672]: time="2026-01-21T00:57:39.745539108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 21 00:57:40.311941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465569830.mount: Deactivated successfully. 
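Each successful pull is recorded twice, once by tag and once by repo digest; the digest form pins the exact image content. An illustrative split of the coredns reference taken from the record above:

    # Separate a repo-digest reference (as recorded above) into repository and digest.
    ref = ("registry.k8s.io/coredns/coredns@"
           "sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97")
    repo, digest = ref.split("@", 1)
    print(repo)     # registry.k8s.io/coredns/coredns
    print(digest)   # sha256:40384aa1f5ea...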
Jan 21 00:57:40.324861 containerd[1672]: time="2026-01-21T00:57:40.324816971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:57:40.327187 containerd[1672]: time="2026-01-21T00:57:40.327161336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 21 00:57:40.328727 containerd[1672]: time="2026-01-21T00:57:40.328632632Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:57:40.331271 containerd[1672]: time="2026-01-21T00:57:40.331228323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 21 00:57:40.331850 containerd[1672]: time="2026-01-21T00:57:40.331708764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.076202ms" Jan 21 00:57:40.331850 containerd[1672]: time="2026-01-21T00:57:40.331736875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 21 00:57:40.332462 containerd[1672]: time="2026-01-21T00:57:40.332442517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 21 00:57:41.022905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227147275.mount: Deactivated successfully. 
Jan 21 00:57:44.099178 containerd[1672]: time="2026-01-21T00:57:44.098741704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:44.100483 containerd[1672]: time="2026-01-21T00:57:44.100261248Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 21 00:57:44.102102 containerd[1672]: time="2026-01-21T00:57:44.102078376Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:44.104960 containerd[1672]: time="2026-01-21T00:57:44.104938265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:57:44.105801 containerd[1672]: time="2026-01-21T00:57:44.105775987Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.773256014s" Jan 21 00:57:44.105871 containerd[1672]: time="2026-01-21T00:57:44.105861041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 21 00:57:44.924350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 21 00:57:44.930264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:45.082340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:45.086217 kernel: audit: type=1130 audit(1768957065.081:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:45.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:45.092705 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 21 00:57:45.202142 kubelet[2386]: E0121 00:57:45.202039 2386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 21 00:57:45.204741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 21 00:57:45.204962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 21 00:57:45.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:57:45.205583 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.2M memory peak. 
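The kubelet keeps exiting (restart counters 2 and 3 above) for the same reason each time: /var/lib/kubelet/config.yaml does not exist yet. That file is typically written later during cluster bootstrap (for example by kubeadm init/join), so systemd simply reschedules the unit until it appears. A minimal illustration of the missing prerequisite named in the error above:

    from pathlib import Path

    # The kubelet refuses to start until this file exists; the path is the one
    # named in the error message above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    print("kubelet config present:", cfg.is_file())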
Jan 21 00:57:45.209437 kernel: audit: type=1131 audit(1768957065.204:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:57:47.516935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:47.517078 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.2M memory peak. Jan 21 00:57:47.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:47.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:47.522981 kernel: audit: type=1130 audit(1768957067.515:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:47.523028 kernel: audit: type=1131 audit(1768957067.515:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:47.525397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:47.550958 systemd[1]: Reload requested from client PID 2400 ('systemctl') (unit session-8.scope)... Jan 21 00:57:47.550972 systemd[1]: Reloading... Jan 21 00:57:47.650174 zram_generator::config[2449]: No configuration found. Jan 21 00:57:47.835792 systemd[1]: Reloading finished in 284 ms. 
Jan 21 00:57:47.874178 kernel: audit: type=1334 audit(1768957067.868:289): prog-id=63 op=LOAD Jan 21 00:57:47.874255 kernel: audit: type=1334 audit(1768957067.868:290): prog-id=48 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=63 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=48 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=64 op=LOAD Jan 21 00:57:47.878173 kernel: audit: type=1334 audit(1768957067.868:291): prog-id=64 op=LOAD Jan 21 00:57:47.878221 kernel: audit: type=1334 audit(1768957067.868:292): prog-id=65 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=65 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=49 op=UNLOAD Jan 21 00:57:47.881055 kernel: audit: type=1334 audit(1768957067.868:293): prog-id=49 op=UNLOAD Jan 21 00:57:47.881100 kernel: audit: type=1334 audit(1768957067.868:294): prog-id=50 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=50 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=66 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=51 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=67 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=68 op=LOAD Jan 21 00:57:47.868000 audit: BPF prog-id=52 op=UNLOAD Jan 21 00:57:47.868000 audit: BPF prog-id=53 op=UNLOAD Jan 21 00:57:47.869000 audit: BPF prog-id=69 op=LOAD Jan 21 00:57:47.869000 audit: BPF prog-id=43 op=UNLOAD Jan 21 00:57:47.869000 audit: BPF prog-id=70 op=LOAD Jan 21 00:57:47.869000 audit: BPF prog-id=71 op=LOAD Jan 21 00:57:47.869000 audit: BPF prog-id=44 op=UNLOAD Jan 21 00:57:47.869000 audit: BPF prog-id=45 op=UNLOAD Jan 21 00:57:47.869000 audit: BPF prog-id=72 op=LOAD Jan 21 00:57:47.869000 audit: BPF prog-id=73 op=LOAD Jan 21 00:57:47.869000 audit: BPF prog-id=46 op=UNLOAD Jan 21 00:57:47.869000 audit: BPF prog-id=47 op=UNLOAD Jan 21 00:57:47.870000 audit: BPF prog-id=74 op=LOAD Jan 21 00:57:47.870000 audit: BPF prog-id=59 op=UNLOAD Jan 21 00:57:47.872000 audit: BPF prog-id=75 op=LOAD Jan 21 00:57:47.872000 audit: BPF prog-id=54 op=UNLOAD Jan 21 00:57:47.872000 audit: BPF prog-id=76 op=LOAD Jan 21 00:57:47.872000 audit: BPF prog-id=58 op=UNLOAD Jan 21 00:57:47.873000 audit: BPF prog-id=77 op=LOAD Jan 21 00:57:47.873000 audit: BPF prog-id=55 op=UNLOAD Jan 21 00:57:47.873000 audit: BPF prog-id=78 op=LOAD Jan 21 00:57:47.873000 audit: BPF prog-id=79 op=LOAD Jan 21 00:57:47.873000 audit: BPF prog-id=56 op=UNLOAD Jan 21 00:57:47.873000 audit: BPF prog-id=57 op=UNLOAD Jan 21 00:57:47.875000 audit: BPF prog-id=80 op=LOAD Jan 21 00:57:47.875000 audit: BPF prog-id=60 op=UNLOAD Jan 21 00:57:47.875000 audit: BPF prog-id=81 op=LOAD Jan 21 00:57:47.875000 audit: BPF prog-id=82 op=LOAD Jan 21 00:57:47.875000 audit: BPF prog-id=61 op=UNLOAD Jan 21 00:57:47.875000 audit: BPF prog-id=62 op=UNLOAD Jan 21 00:57:47.890839 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 21 00:57:47.890912 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 21 00:57:47.891171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:47.891218 systemd[1]: kubelet.service: Consumed 93ms CPU time, 98.7M memory peak. Jan 21 00:57:47.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 21 00:57:47.892466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:50.943297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
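The burst of "BPF prog-id=... op=LOAD/UNLOAD" audit records above accompanies the systemd reload: on daemon-reload, systemd most likely detaches and re-attaches its per-unit cgroup BPF programs, which shows up as paired UNLOAD/LOAD events. An illustrative tally over sample lines copied from this log:

    import re
    from collections import Counter

    # Count LOAD vs UNLOAD in audit BPF records (three sample lines from above).
    sample = """
    audit: BPF prog-id=63 op=LOAD
    audit: BPF prog-id=48 op=UNLOAD
    audit: BPF prog-id=64 op=LOAD
    """
    print(Counter(re.findall(r"op=(LOAD|UNLOAD)", sample)))  # Counter({'LOAD': 2, 'UNLOAD': 1})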
Jan 21 00:57:50.948870 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 21 00:57:50.948932 kernel: audit: type=1130 audit(1768957070.943:330): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:50.954519 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 00:57:50.988464 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:57:50.988464 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 00:57:50.988464 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:57:50.988964 kubelet[2500]: I0121 00:57:50.988542 2500 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 00:57:51.422951 kubelet[2500]: I0121 00:57:51.422904 2500 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 00:57:51.422951 kubelet[2500]: I0121 00:57:51.422938 2500 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 00:57:51.423234 kubelet[2500]: I0121 00:57:51.423208 2500 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 00:57:51.470090 kubelet[2500]: I0121 00:57:51.468770 2500 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 00:57:51.472253 kubelet[2500]: E0121 00:57:51.471053 2500 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 00:57:51.479246 kubelet[2500]: I0121 00:57:51.479227 2500 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:57:51.482777 kubelet[2500]: I0121 00:57:51.482753 2500 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 00:57:51.483017 kubelet[2500]: I0121 00:57:51.482991 2500 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:57:51.483189 kubelet[2500]: I0121 00:57:51.483018 2500 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-1ed4874c6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:57:51.483189 kubelet[2500]: I0121 00:57:51.483188 2500 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:57:51.483311 kubelet[2500]: I0121 00:57:51.483196 2500 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 00:57:51.483311 kubelet[2500]: I0121 00:57:51.483308 2500 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:57:51.488506 kubelet[2500]: I0121 00:57:51.488486 2500 kubelet.go:480] "Attempting to sync node with API server" Jan 21 00:57:51.488506 kubelet[2500]: I0121 00:57:51.488505 2500 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 00:57:51.488620 kubelet[2500]: I0121 00:57:51.488605 2500 kubelet.go:386] "Adding apiserver pod source" Jan 21 00:57:51.488620 kubelet[2500]: I0121 00:57:51.488619 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:57:51.509527 kubelet[2500]: E0121 00:57:51.509419 2500 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-1ed4874c6e&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 00:57:51.509838 kubelet[2500]: E0121 00:57:51.509806 2500 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 21 00:57:51.509925 kubelet[2500]: I0121 00:57:51.509893 2500 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 00:57:51.510689 kubelet[2500]: I0121 00:57:51.510655 2500 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 00:57:51.512859 kubelet[2500]: W0121 00:57:51.512828 2500 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 21 00:57:51.515464 kubelet[2500]: I0121 00:57:51.515442 2500 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 00:57:51.515532 kubelet[2500]: I0121 00:57:51.515485 2500 server.go:1289] "Started kubelet" Jan 21 00:57:51.518187 kubelet[2500]: I0121 00:57:51.517690 2500 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 00:57:51.518888 kubelet[2500]: I0121 00:57:51.518872 2500 server.go:317] "Adding debug handlers to kubelet server" Jan 21 00:57:51.521804 kubelet[2500]: I0121 00:57:51.521000 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 00:57:51.521804 kubelet[2500]: I0121 00:57:51.521333 2500 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 00:57:51.523496 kubelet[2500]: I0121 00:57:51.523473 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 00:57:51.532230 kernel: audit: type=1325 audit(1768957071.529:331): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.529000 audit[2515]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.532406 kubelet[2500]: E0121 00:57:51.527679 2500 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-n-1ed4874c6e.188c99158e981ed9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-1ed4874c6e,UID:ci-4547-0-0-n-1ed4874c6e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-1ed4874c6e,},FirstTimestamp:2026-01-21 00:57:51.515459289 +0000 UTC m=+0.556849685,LastTimestamp:2026-01-21 00:57:51.515459289 +0000 UTC m=+0.556849685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-1ed4874c6e,}" Jan 21 00:57:51.532406 kubelet[2500]: I0121 00:57:51.530963 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 00:57:51.538209 kernel: audit: type=1300 audit(1768957071.529:331): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7bd22db0 a2=0 a3=0 items=0 ppid=2500 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.529000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=136 a0=3 a1=7ffc7bd22db0 a2=0 a3=0 items=0 ppid=2500 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.538610 kubelet[2500]: E0121 00:57:51.535872 2500 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" Jan 21 00:57:51.538610 kubelet[2500]: I0121 00:57:51.535895 2500 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 00:57:51.538610 kubelet[2500]: I0121 00:57:51.536045 2500 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 00:57:51.538610 kubelet[2500]: I0121 00:57:51.536094 2500 reconciler.go:26] "Reconciler: start to sync state" Jan 21 00:57:51.538610 kubelet[2500]: E0121 00:57:51.536860 2500 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 00:57:51.538610 kubelet[2500]: I0121 00:57:51.537032 2500 factory.go:223] Registration of the systemd container factory successfully Jan 21 00:57:51.538610 kubelet[2500]: I0121 00:57:51.537101 2500 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 00:57:51.538610 kubelet[2500]: E0121 00:57:51.538305 2500 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 00:57:51.539141 kubelet[2500]: E0121 00:57:51.539118 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-1ed4874c6e?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Jan 21 00:57:51.540175 kubelet[2500]: I0121 00:57:51.539414 2500 factory.go:223] Registration of the containerd container factory successfully Jan 21 00:57:51.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 00:57:51.544271 kernel: audit: type=1327 audit(1768957071.529:331): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 00:57:51.544337 kernel: audit: type=1325 audit(1768957071.533:332): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.533000 audit[2516]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.533000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3a5cf520 a2=0 a3=0 items=0 ppid=2500 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.552176 kernel: audit: type=1300 audit(1768957071.533:332): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3a5cf520 a2=0 a3=0 items=0 ppid=2500 pid=2516 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 00:57:51.557206 kernel: audit: type=1327 audit(1768957071.533:332): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 00:57:51.557274 kernel: audit: type=1325 audit(1768957071.539:333): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.539000 audit[2518]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.539000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffedb292040 a2=0 a3=0 items=0 ppid=2500 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.561144 kernel: audit: type=1300 audit(1768957071.539:333): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffedb292040 a2=0 a3=0 items=0 ppid=2500 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:57:51.564268 kubelet[2500]: I0121 00:57:51.564245 2500 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 00:57:51.564801 kernel: audit: type=1327 audit(1768957071.539:333): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:57:51.565542 kubelet[2500]: I0121 00:57:51.565528 2500 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 00:57:51.565610 kubelet[2500]: I0121 00:57:51.565605 2500 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 00:57:51.565682 kubelet[2500]: I0121 00:57:51.565660 2500 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
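In the NETFILTER_CFG records around this point, family=2 and family=10 are the numeric address families, which is how the kubelet's IPv4 and IPv6 rule installs (the two "Initialized iptables rules." lines above) can be told apart. A one-line check, valid on Linux:

    import socket

    # On Linux these constants match the family= values in the NETFILTER_CFG
    # records: 2 is IPv4 (AF_INET), 10 is IPv6 (AF_INET6).
    print(int(socket.AF_INET), int(socket.AF_INET6))   # 2 10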
Jan 21 00:57:51.565717 kubelet[2500]: I0121 00:57:51.565713 2500 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 00:57:51.565787 kubelet[2500]: E0121 00:57:51.565775 2500 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 00:57:51.541000 audit[2520]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.541000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd10d867b0 a2=0 a3=0 items=0 ppid=2500 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:57:51.562000 audit[2526]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.562000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffecc471380 a2=0 a3=0 items=0 ppid=2500 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 21 00:57:51.565000 audit[2527]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:51.565000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd94dffe50 a2=0 a3=0 items=0 ppid=2500 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 21 00:57:51.567000 audit[2528]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.567000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff019e3c60 a2=0 a3=0 items=0 ppid=2500 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 00:57:51.568000 audit[2529]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.568000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe204b93d0 a2=0 a3=0 items=0 ppid=2500 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 00:57:51.569000 audit[2530]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:57:51.569000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd1f38840 a2=0 a3=0 items=0 ppid=2500 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 00:57:51.570000 audit[2531]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:51.570000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3d43d8b0 a2=0 a3=0 items=0 ppid=2500 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 21 00:57:51.572000 audit[2532]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:51.572000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2d347eb0 a2=0 a3=0 items=0 ppid=2500 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 21 00:57:51.574191 kubelet[2500]: I0121 00:57:51.573788 2500 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 00:57:51.574191 kubelet[2500]: I0121 00:57:51.573802 2500 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 00:57:51.574191 kubelet[2500]: I0121 00:57:51.573817 2500 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:57:51.574000 audit[2533]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:57:51.574000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0ce57940 a2=0 a3=0 items=0 ppid=2500 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:51.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 21 00:57:51.574733 kubelet[2500]: E0121 00:57:51.574700 2500 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 00:57:51.576948 kubelet[2500]: I0121 00:57:51.576758 2500 policy_none.go:49] "None policy: Start" Jan 21 00:57:51.576948 kubelet[2500]: I0121 00:57:51.576775 2500 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 00:57:51.576948 kubelet[2500]: I0121 00:57:51.576784 2500 state_mem.go:35] "Initializing new in-memory state store" Jan 21 00:57:51.582798 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 21 00:57:51.598166 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 21 00:57:51.601923 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 21 00:57:51.621451 kubelet[2500]: E0121 00:57:51.621408 2500 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 00:57:51.621653 kubelet[2500]: I0121 00:57:51.621632 2500 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 00:57:51.621691 kubelet[2500]: I0121 00:57:51.621649 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 00:57:51.622991 kubelet[2500]: I0121 00:57:51.622696 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 00:57:51.624440 kubelet[2500]: E0121 00:57:51.624362 2500 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 21 00:57:51.624440 kubelet[2500]: E0121 00:57:51.624418 2500 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-n-1ed4874c6e\" not found" Jan 21 00:57:51.680227 systemd[1]: Created slice kubepods-burstable-pode1d4c05c3fd5caa63898673a5660057d.slice - libcontainer container kubepods-burstable-pode1d4c05c3fd5caa63898673a5660057d.slice. Jan 21 00:57:51.695473 kubelet[2500]: E0121 00:57:51.695429 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.699290 systemd[1]: Created slice kubepods-burstable-pod927fd1e4b419fedd568994b0f5d14964.slice - libcontainer container kubepods-burstable-pod927fd1e4b419fedd568994b0f5d14964.slice. Jan 21 00:57:51.711220 kubelet[2500]: E0121 00:57:51.711187 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.712948 systemd[1]: Created slice kubepods-burstable-podca97b713864d75bad670b26f2c2dbe9d.slice - libcontainer container kubepods-burstable-podca97b713864d75bad670b26f2c2dbe9d.slice. 
Jan 21 00:57:51.714664 kubelet[2500]: E0121 00:57:51.714630 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.724245 kubelet[2500]: I0121 00:57:51.724217 2500 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.724715 kubelet[2500]: E0121 00:57:51.724689 2500 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737211 kubelet[2500]: I0121 00:57:51.737173 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737211 kubelet[2500]: I0121 00:57:51.737210 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737369 kubelet[2500]: I0121 00:57:51.737229 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737369 kubelet[2500]: I0121 00:57:51.737246 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737369 kubelet[2500]: I0121 00:57:51.737279 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737369 kubelet[2500]: I0121 00:57:51.737298 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737369 kubelet[2500]: I0121 00:57:51.737325 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/927fd1e4b419fedd568994b0f5d14964-kubeconfig\") pod 
\"kube-scheduler-ci-4547-0-0-n-1ed4874c6e\" (UID: \"927fd1e4b419fedd568994b0f5d14964\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737477 kubelet[2500]: I0121 00:57:51.737344 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.737477 kubelet[2500]: I0121 00:57:51.737359 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.740662 kubelet[2500]: E0121 00:57:51.740629 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-1ed4874c6e?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Jan 21 00:57:51.927003 kubelet[2500]: I0121 00:57:51.926723 2500 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.927091 kubelet[2500]: E0121 00:57:51.927018 2500 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:51.997755 containerd[1672]: time="2026-01-21T00:57:51.997492968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-1ed4874c6e,Uid:e1d4c05c3fd5caa63898673a5660057d,Namespace:kube-system,Attempt:0,}" Jan 21 00:57:52.012561 containerd[1672]: time="2026-01-21T00:57:52.012521042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-1ed4874c6e,Uid:927fd1e4b419fedd568994b0f5d14964,Namespace:kube-system,Attempt:0,}" Jan 21 00:57:52.015369 containerd[1672]: time="2026-01-21T00:57:52.015243860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-1ed4874c6e,Uid:ca97b713864d75bad670b26f2c2dbe9d,Namespace:kube-system,Attempt:0,}" Jan 21 00:57:52.043988 containerd[1672]: time="2026-01-21T00:57:52.043918659Z" level=info msg="connecting to shim e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5" address="unix:///run/containerd/s/1bccbdc134e16526c2277008803e7e7004abd2c79811efc62445151219c8ec0f" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:57:52.075366 containerd[1672]: time="2026-01-21T00:57:52.075144635Z" level=info msg="connecting to shim b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370" address="unix:///run/containerd/s/8fe8dce4a43403dc7dfc81323195d0fd0d50a25210b455f70d66ef5de61d7a2a" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:57:52.080377 systemd[1]: Started cri-containerd-e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5.scope - libcontainer container e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5. 
Jan 21 00:57:52.086803 containerd[1672]: time="2026-01-21T00:57:52.086365840Z" level=info msg="connecting to shim d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044" address="unix:///run/containerd/s/6420c3327b43e3b28b240d3ef8bf046efda1afa14d6ba6a125c849ecb8540329" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:57:52.097000 audit: BPF prog-id=83 op=LOAD Jan 21 00:57:52.098000 audit: BPF prog-id=84 op=LOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=84 op=UNLOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=85 op=LOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=86 op=LOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=86 op=UNLOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=85 op=UNLOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.098000 audit: BPF prog-id=87 op=LOAD Jan 21 00:57:52.098000 audit[2555]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2543 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530336664656232333639333336613362623930623563333061373434 Jan 21 00:57:52.116383 systemd[1]: Started cri-containerd-b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370.scope - libcontainer container b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370. Jan 21 00:57:52.122751 systemd[1]: Started cri-containerd-d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044.scope - libcontainer container d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044. 
Jan 21 00:57:52.132000 audit: BPF prog-id=88 op=LOAD Jan 21 00:57:52.133000 audit: BPF prog-id=89 op=LOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=89 op=UNLOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=90 op=LOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=91 op=LOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=91 op=UNLOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=90 op=UNLOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.133000 audit: BPF prog-id=92 op=LOAD Jan 21 00:57:52.133000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239326534633233653136626333373531323935366232316135323631 Jan 21 00:57:52.141208 kubelet[2500]: E0121 00:57:52.141141 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-1ed4874c6e?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Jan 21 00:57:52.146000 audit: BPF prog-id=93 op=LOAD Jan 21 00:57:52.146000 audit: BPF prog-id=94 op=LOAD Jan 21 00:57:52.146000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=94 op=UNLOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=95 op=LOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=96 op=LOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=96 op=UNLOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=95 op=UNLOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.147000 audit: BPF prog-id=97 op=LOAD Jan 21 00:57:52.147000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431653836663164393231333761343931666638356133396234376165 Jan 21 00:57:52.162863 containerd[1672]: time="2026-01-21T00:57:52.162828034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-1ed4874c6e,Uid:e1d4c05c3fd5caa63898673a5660057d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5\"" Jan 21 00:57:52.168507 containerd[1672]: time="2026-01-21T00:57:52.168475622Z" level=info msg="CreateContainer within sandbox \"e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 21 00:57:52.185630 containerd[1672]: time="2026-01-21T00:57:52.185286931Z" level=info msg="Container 499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:57:52.198042 containerd[1672]: time="2026-01-21T00:57:52.198001594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-1ed4874c6e,Uid:927fd1e4b419fedd568994b0f5d14964,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044\"" Jan 21 00:57:52.199517 containerd[1672]: time="2026-01-21T00:57:52.199495465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-1ed4874c6e,Uid:ca97b713864d75bad670b26f2c2dbe9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370\"" Jan 21 00:57:52.199674 containerd[1672]: time="2026-01-21T00:57:52.199582257Z" level=info msg="CreateContainer within sandbox \"e03fdeb2369336a3bb90b5c30a7444a75ec2016dfe80ced70d1d65c1454b17f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed\"" Jan 21 00:57:52.200378 containerd[1672]: time="2026-01-21T00:57:52.200151307Z" level=info msg="StartContainer for \"499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed\"" Jan 21 00:57:52.202381 containerd[1672]: time="2026-01-21T00:57:52.202334424Z" level=info msg="connecting to shim 499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed" address="unix:///run/containerd/s/1bccbdc134e16526c2277008803e7e7004abd2c79811efc62445151219c8ec0f" protocol=ttrpc version=3 Jan 21 00:57:52.202910 containerd[1672]: time="2026-01-21T00:57:52.202880565Z" level=info msg="CreateContainer within sandbox \"d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 21 00:57:52.205714 containerd[1672]: time="2026-01-21T00:57:52.205620492Z" level=info msg="CreateContainer within sandbox \"b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 21 00:57:52.219223 containerd[1672]: time="2026-01-21T00:57:52.219196934Z" level=info msg="Container af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:57:52.222388 systemd[1]: Started cri-containerd-499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed.scope - libcontainer container 499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed. 
Jan 21 00:57:52.226175 containerd[1672]: time="2026-01-21T00:57:52.225084783Z" level=info msg="Container 8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:57:52.238990 containerd[1672]: time="2026-01-21T00:57:52.238933680Z" level=info msg="CreateContainer within sandbox \"d1e86f1d92137a491ff85a39b47aea505bf86fe832c8cf3a45d4d54222eae044\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c\"" Jan 21 00:57:52.237000 audit: BPF prog-id=98 op=LOAD Jan 21 00:57:52.239000 audit: BPF prog-id=99 op=LOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=99 op=UNLOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=100 op=LOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=101 op=LOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=101 op=UNLOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=100 op=UNLOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.239000 audit: BPF prog-id=102 op=LOAD Jan 21 00:57:52.239000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2543 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396330386564393661363432383336613735663463303138333666 Jan 21 00:57:52.240450 containerd[1672]: time="2026-01-21T00:57:52.239991241Z" level=info msg="StartContainer for \"af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c\"" Jan 21 00:57:52.241581 containerd[1672]: time="2026-01-21T00:57:52.241561429Z" level=info msg="connecting to shim af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c" address="unix:///run/containerd/s/6420c3327b43e3b28b240d3ef8bf046efda1afa14d6ba6a125c849ecb8540329" protocol=ttrpc version=3 Jan 21 00:57:52.251774 containerd[1672]: time="2026-01-21T00:57:52.251444147Z" level=info msg="CreateContainer within sandbox \"b92e4c23e16bc37512956b21a52614b96a32fc31e57985a71af19dc637bf8370\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224\"" Jan 21 00:57:52.254495 containerd[1672]: time="2026-01-21T00:57:52.254468809Z" level=info msg="StartContainer for \"8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224\"" Jan 21 00:57:52.255936 containerd[1672]: time="2026-01-21T00:57:52.255881322Z" level=info msg="connecting to shim 8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224" address="unix:///run/containerd/s/8fe8dce4a43403dc7dfc81323195d0fd0d50a25210b455f70d66ef5de61d7a2a" protocol=ttrpc version=3 Jan 21 00:57:52.266532 systemd[1]: Started cri-containerd-af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c.scope - libcontainer container af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c. 
Jan 21 00:57:52.277424 systemd[1]: Started cri-containerd-8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224.scope - libcontainer container 8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224. Jan 21 00:57:52.289000 audit: BPF prog-id=103 op=LOAD Jan 21 00:57:52.291000 audit: BPF prog-id=104 op=LOAD Jan 21 00:57:52.291000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.291000 audit: BPF prog-id=104 op=UNLOAD Jan 21 00:57:52.291000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.292000 audit: BPF prog-id=105 op=LOAD Jan 21 00:57:52.292000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.292000 audit: BPF prog-id=106 op=LOAD Jan 21 00:57:52.292000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.292000 audit: BPF prog-id=106 op=UNLOAD Jan 21 00:57:52.292000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.292000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.292000 audit: BPF prog-id=105 op=UNLOAD Jan 21 00:57:52.292000 audit[2698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.292000 audit: BPF prog-id=107 op=LOAD Jan 21 00:57:52.292000 audit[2698]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2591 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166366365613939653463303434616539393234373265626363613263 Jan 21 00:57:52.297000 audit: BPF prog-id=108 op=LOAD Jan 21 00:57:52.298000 audit: BPF prog-id=109 op=LOAD Jan 21 00:57:52.298000 audit[2714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.298000 audit: BPF prog-id=109 op=UNLOAD Jan 21 00:57:52.298000 audit[2714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.299000 audit: BPF prog-id=110 op=LOAD Jan 21 00:57:52.299000 audit[2714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.299000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.299000 audit: BPF prog-id=111 op=LOAD Jan 21 00:57:52.299000 audit[2714]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.299000 audit: BPF prog-id=111 op=UNLOAD Jan 21 00:57:52.299000 audit[2714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.299000 audit: BPF prog-id=110 op=UNLOAD Jan 21 00:57:52.299000 audit[2714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.299000 audit: BPF prog-id=112 op=LOAD Jan 21 00:57:52.299000 audit[2714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2574 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:57:52.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861356265313062616331626332326264633830616430666437373436 Jan 21 00:57:52.304811 containerd[1672]: time="2026-01-21T00:57:52.304451617Z" level=info msg="StartContainer for \"499c08ed96a642836a75f4c01836f996b9f2e0b9b6da1d49ea1bfccf780282ed\" returns successfully" Jan 21 00:57:52.329258 kubelet[2500]: I0121 00:57:52.329096 2500 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:52.331169 kubelet[2500]: E0121 00:57:52.329479 2500 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: 
connect: connection refused" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:52.345893 containerd[1672]: time="2026-01-21T00:57:52.345865413Z" level=info msg="StartContainer for \"af6cea99e4c044ae992472ebcca2ccebcdb099f893e247653bb3c89ab2f2b02c\" returns successfully" Jan 21 00:57:52.362102 containerd[1672]: time="2026-01-21T00:57:52.362068268Z" level=info msg="StartContainer for \"8a5be10bac1bc22bdc80ad0fd774642da4558272e216ab5ba8c84c5835847224\" returns successfully" Jan 21 00:57:52.580224 kubelet[2500]: E0121 00:57:52.578593 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:52.585051 kubelet[2500]: E0121 00:57:52.584858 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:52.592250 kubelet[2500]: E0121 00:57:52.592224 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:53.131986 kubelet[2500]: I0121 00:57:53.131776 2500 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:53.587587 kubelet[2500]: E0121 00:57:53.587458 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:53.588003 kubelet[2500]: E0121 00:57:53.587907 2500 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.120510 kubelet[2500]: E0121 00:57:54.120475 2500 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-n-1ed4874c6e\" not found" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.213168 kubelet[2500]: I0121 00:57:54.212804 2500 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.239304 kubelet[2500]: I0121 00:57:54.239278 2500 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.261364 kubelet[2500]: E0121 00:57:54.261310 2500 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.261364 kubelet[2500]: I0121 00:57:54.261338 2500 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.263383 kubelet[2500]: E0121 00:57:54.263271 2500 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-1ed4874c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.263383 kubelet[2500]: I0121 00:57:54.263284 2500 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.266055 kubelet[2500]: E0121 00:57:54.266044 2500 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:54.512442 kubelet[2500]: I0121 00:57:54.512399 2500 apiserver.go:52] "Watching apiserver" Jan 21 00:57:54.536961 kubelet[2500]: I0121 00:57:54.536935 2500 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 00:57:56.028189 kubelet[2500]: I0121 00:57:56.027946 2500 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:56.280573 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-8.scope)... Jan 21 00:57:56.280946 systemd[1]: Reloading... Jan 21 00:57:56.366176 zram_generator::config[2834]: No configuration found. Jan 21 00:57:56.564422 systemd[1]: Reloading finished in 283 ms. Jan 21 00:57:56.600404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 21 00:57:56.615290 systemd[1]: kubelet.service: Deactivated successfully. Jan 21 00:57:56.615551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:56.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.616219 kernel: kauditd_printk_skb: 159 callbacks suppressed Jan 21 00:57:56.616269 kernel: audit: type=1131 audit(1768957076.615:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:57:56.619581 systemd[1]: kubelet.service: Consumed 844ms CPU time, 129.3M memory peak. Jan 21 00:57:56.621467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 21 00:57:56.624755 kernel: audit: type=1334 audit(1768957076.621:392): prog-id=113 op=LOAD Jan 21 00:57:56.624807 kernel: audit: type=1334 audit(1768957076.621:393): prog-id=77 op=UNLOAD Jan 21 00:57:56.621000 audit: BPF prog-id=113 op=LOAD Jan 21 00:57:56.621000 audit: BPF prog-id=77 op=UNLOAD Jan 21 00:57:56.621000 audit: BPF prog-id=114 op=LOAD Jan 21 00:57:56.631188 kernel: audit: type=1334 audit(1768957076.621:394): prog-id=114 op=LOAD Jan 21 00:57:56.631241 kernel: audit: type=1334 audit(1768957076.621:395): prog-id=115 op=LOAD Jan 21 00:57:56.621000 audit: BPF prog-id=115 op=LOAD Jan 21 00:57:56.621000 audit: BPF prog-id=78 op=UNLOAD Jan 21 00:57:56.632303 kernel: audit: type=1334 audit(1768957076.621:396): prog-id=78 op=UNLOAD Jan 21 00:57:56.621000 audit: BPF prog-id=79 op=UNLOAD Jan 21 00:57:56.633600 kernel: audit: type=1334 audit(1768957076.621:397): prog-id=79 op=UNLOAD Jan 21 00:57:56.622000 audit: BPF prog-id=116 op=LOAD Jan 21 00:57:56.635432 kernel: audit: type=1334 audit(1768957076.622:398): prog-id=116 op=LOAD Jan 21 00:57:56.635484 kernel: audit: type=1334 audit(1768957076.622:399): prog-id=80 op=UNLOAD Jan 21 00:57:56.622000 audit: BPF prog-id=80 op=UNLOAD Jan 21 00:57:56.623000 audit: BPF prog-id=117 op=LOAD Jan 21 00:57:56.623000 audit: BPF prog-id=118 op=LOAD Jan 21 00:57:56.623000 audit: BPF prog-id=81 op=UNLOAD Jan 21 00:57:56.623000 audit: BPF prog-id=82 op=UNLOAD Jan 21 00:57:56.623000 audit: BPF prog-id=119 op=LOAD Jan 21 00:57:56.623000 audit: BPF prog-id=74 op=UNLOAD Jan 21 00:57:56.625000 audit: BPF prog-id=120 op=LOAD Jan 21 00:57:56.625000 audit: BPF prog-id=63 op=UNLOAD Jan 21 00:57:56.625000 audit: BPF prog-id=121 op=LOAD Jan 21 00:57:56.640177 kernel: audit: type=1334 audit(1768957076.623:400): prog-id=117 op=LOAD Jan 21 00:57:56.625000 audit: BPF prog-id=122 op=LOAD Jan 21 00:57:56.625000 audit: BPF prog-id=64 op=UNLOAD Jan 21 00:57:56.625000 audit: BPF prog-id=65 op=UNLOAD Jan 21 00:57:56.626000 audit: BPF prog-id=123 op=LOAD Jan 21 00:57:56.626000 audit: BPF prog-id=75 op=UNLOAD Jan 21 00:57:56.627000 audit: BPF prog-id=124 op=LOAD Jan 21 00:57:56.627000 audit: BPF prog-id=76 op=UNLOAD Jan 21 00:57:56.627000 audit: BPF prog-id=125 op=LOAD Jan 21 00:57:56.627000 audit: BPF prog-id=66 op=UNLOAD Jan 21 00:57:56.627000 audit: BPF prog-id=126 op=LOAD Jan 21 00:57:56.627000 audit: BPF prog-id=127 op=LOAD Jan 21 00:57:56.627000 audit: BPF prog-id=67 op=UNLOAD Jan 21 00:57:56.627000 audit: BPF prog-id=68 op=UNLOAD Jan 21 00:57:56.627000 audit: BPF prog-id=128 op=LOAD Jan 21 00:57:56.627000 audit: BPF prog-id=69 op=UNLOAD Jan 21 00:57:56.628000 audit: BPF prog-id=129 op=LOAD Jan 21 00:57:56.628000 audit: BPF prog-id=130 op=LOAD Jan 21 00:57:56.628000 audit: BPF prog-id=70 op=UNLOAD Jan 21 00:57:56.628000 audit: BPF prog-id=71 op=UNLOAD Jan 21 00:57:56.628000 audit: BPF prog-id=131 op=LOAD Jan 21 00:57:56.628000 audit: BPF prog-id=132 op=LOAD Jan 21 00:57:56.628000 audit: BPF prog-id=72 op=UNLOAD Jan 21 00:57:56.628000 audit: BPF prog-id=73 op=UNLOAD Jan 21 00:57:56.741838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 21 00:57:56.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:57:56.751525 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 21 00:57:56.787192 kubelet[2881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:57:56.787192 kubelet[2881]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 21 00:57:56.787192 kubelet[2881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 00:57:56.787192 kubelet[2881]: I0121 00:57:56.786816 2881 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 00:57:56.793209 kubelet[2881]: I0121 00:57:56.792782 2881 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 21 00:57:56.793209 kubelet[2881]: I0121 00:57:56.792803 2881 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 00:57:56.793209 kubelet[2881]: I0121 00:57:56.792974 2881 server.go:956] "Client rotation is on, will bootstrap in background" Jan 21 00:57:56.794296 kubelet[2881]: I0121 00:57:56.794282 2881 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 21 00:57:57.153728 update_engine[1653]: I20260121 00:57:57.153599 1653 update_attempter.cc:509] Updating boot flags... Jan 21 00:57:57.486396 kubelet[2881]: I0121 00:57:57.407497 2881 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 21 00:57:57.486396 kubelet[2881]: I0121 00:57:57.414119 2881 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 00:57:57.486396 kubelet[2881]: I0121 00:57:57.417888 2881 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 21 00:57:57.486396 kubelet[2881]: I0121 00:57:57.418095 2881 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 00:57:57.486608 kubelet[2881]: I0121 00:57:57.418116 2881 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-1ed4874c6e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 00:57:57.486608 kubelet[2881]: I0121 00:57:57.418382 2881 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 00:57:57.486608 kubelet[2881]: I0121 00:57:57.418391 2881 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 00:57:57.486608 kubelet[2881]: I0121 00:57:57.418465 2881 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:57:57.486608 kubelet[2881]: I0121 00:57:57.418655 2881 kubelet.go:480] "Attempting to sync node with API server" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.418665 2881 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.418685 2881 kubelet.go:386] "Adding apiserver pod source" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.418722 2881 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.475541 2881 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.476213 2881 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.479788 2881 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.479847 2881 server.go:1289] "Started kubelet" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.480825 2881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 
00:57:57.488013 kubelet[2881]: I0121 00:57:57.482184 2881 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.482977 2881 server.go:317] "Adding debug handlers to kubelet server" Jan 21 00:57:57.488013 kubelet[2881]: I0121 00:57:57.486018 2881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 00:57:57.488657 kubelet[2881]: I0121 00:57:57.488636 2881 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 21 00:57:57.489505 kubelet[2881]: I0121 00:57:57.489398 2881 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 00:57:57.491478 kubelet[2881]: I0121 00:57:57.491466 2881 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 00:57:57.493083 kubelet[2881]: I0121 00:57:57.491699 2881 reconciler.go:26] "Reconciler: start to sync state" Jan 21 00:57:57.493322 kubelet[2881]: I0121 00:57:57.493220 2881 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 00:57:57.502469 kubelet[2881]: I0121 00:57:57.502448 2881 factory.go:223] Registration of the containerd container factory successfully Jan 21 00:57:57.503025 kubelet[2881]: I0121 00:57:57.502979 2881 factory.go:223] Registration of the systemd container factory successfully Jan 21 00:57:57.503942 kubelet[2881]: E0121 00:57:57.503930 2881 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 21 00:57:57.504218 kubelet[2881]: I0121 00:57:57.503632 2881 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 21 00:57:57.589821 kubelet[2881]: I0121 00:57:57.589498 2881 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 21 00:57:57.596898 kubelet[2881]: I0121 00:57:57.596870 2881 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 21 00:57:57.596898 kubelet[2881]: I0121 00:57:57.596894 2881 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 21 00:57:57.596898 kubelet[2881]: I0121 00:57:57.596914 2881 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 21 00:57:57.597093 kubelet[2881]: I0121 00:57:57.596922 2881 kubelet.go:2436] "Starting kubelet main sync loop" Jan 21 00:57:57.597093 kubelet[2881]: E0121 00:57:57.596967 2881 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 00:57:57.651091 kubelet[2881]: I0121 00:57:57.651032 2881 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 21 00:57:57.651091 kubelet[2881]: I0121 00:57:57.651046 2881 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 21 00:57:57.651091 kubelet[2881]: I0121 00:57:57.651062 2881 state_mem.go:36] "Initialized new in-memory state store" Jan 21 00:57:57.652375 kubelet[2881]: I0121 00:57:57.652260 2881 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 21 00:57:57.652375 kubelet[2881]: I0121 00:57:57.652271 2881 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 21 00:57:57.652375 kubelet[2881]: I0121 00:57:57.652287 2881 policy_none.go:49] "None policy: Start" Jan 21 00:57:57.652375 kubelet[2881]: I0121 00:57:57.652296 2881 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 21 00:57:57.652375 kubelet[2881]: I0121 00:57:57.652305 2881 state_mem.go:35] "Initializing new in-memory state store" Jan 21 00:57:57.652470 kubelet[2881]: I0121 00:57:57.652389 2881 state_mem.go:75] "Updated machine memory state" Jan 21 00:57:57.670341 kubelet[2881]: E0121 00:57:57.670318 2881 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 21 00:57:57.670982 kubelet[2881]: I0121 00:57:57.670463 2881 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 00:57:57.670982 kubelet[2881]: I0121 00:57:57.670476 2881 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 00:57:57.670982 kubelet[2881]: I0121 00:57:57.670914 2881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 00:57:57.686518 kubelet[2881]: E0121 00:57:57.686493 2881 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 21 00:57:57.697646 kubelet[2881]: I0121 00:57:57.697619 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.701306 kubelet[2881]: I0121 00:57:57.698383 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.703895 kubelet[2881]: I0121 00:57:57.698480 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.721186 kubelet[2881]: E0121 00:57:57.720420 2881 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-1ed4874c6e\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.792111 kubelet[2881]: I0121 00:57:57.791667 2881 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.795845 kubelet[2881]: I0121 00:57:57.795578 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.795845 kubelet[2881]: I0121 00:57:57.795608 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.795845 kubelet[2881]: I0121 00:57:57.795650 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.795845 kubelet[2881]: I0121 00:57:57.795668 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/927fd1e4b419fedd568994b0f5d14964-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-1ed4874c6e\" (UID: \"927fd1e4b419fedd568994b0f5d14964\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.795845 kubelet[2881]: I0121 00:57:57.795689 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.796015 kubelet[2881]: I0121 00:57:57.795703 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " 
pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.796015 kubelet[2881]: I0121 00:57:57.795717 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1d4c05c3fd5caa63898673a5660057d-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-1ed4874c6e\" (UID: \"e1d4c05c3fd5caa63898673a5660057d\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.796015 kubelet[2881]: I0121 00:57:57.795730 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.796015 kubelet[2881]: I0121 00:57:57.795744 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca97b713864d75bad670b26f2c2dbe9d-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" (UID: \"ca97b713864d75bad670b26f2c2dbe9d\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.800452 kubelet[2881]: I0121 00:57:57.800374 2881 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:57.800548 kubelet[2881]: I0121 00:57:57.800507 2881 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:58.424401 kubelet[2881]: I0121 00:57:58.424361 2881 apiserver.go:52] "Watching apiserver" Jan 21 00:57:58.493915 kubelet[2881]: I0121 00:57:58.493862 2881 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 00:57:58.629583 kubelet[2881]: I0121 00:57:58.629531 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-n-1ed4874c6e" podStartSLOduration=2.629515275 podStartE2EDuration="2.629515275s" podCreationTimestamp="2026-01-21 00:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:57:58.629304589 +0000 UTC m=+1.873157874" watchObservedRunningTime="2026-01-21 00:57:58.629515275 +0000 UTC m=+1.873368550" Jan 21 00:57:58.638870 kubelet[2881]: I0121 00:57:58.638812 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-1ed4874c6e" podStartSLOduration=1.638795833 podStartE2EDuration="1.638795833s" podCreationTimestamp="2026-01-21 00:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:57:58.637659012 +0000 UTC m=+1.881512298" watchObservedRunningTime="2026-01-21 00:57:58.638795833 +0000 UTC m=+1.882649095" Jan 21 00:57:58.645796 kubelet[2881]: I0121 00:57:58.645726 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:57:58.650560 kubelet[2881]: I0121 00:57:58.650407 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" podStartSLOduration=1.650394296 podStartE2EDuration="1.650394296s" podCreationTimestamp="2026-01-21 00:57:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:57:58.649523631 +0000 UTC m=+1.893376916" watchObservedRunningTime="2026-01-21 00:57:58.650394296 +0000 UTC m=+1.894247557" Jan 21 00:57:58.653231 kubelet[2881]: E0121 00:57:58.653089 2881 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-1ed4874c6e\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:01.937658 kubelet[2881]: I0121 00:58:01.937587 2881 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 21 00:58:01.938660 kubelet[2881]: I0121 00:58:01.938480 2881 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 21 00:58:01.938780 containerd[1672]: time="2026-01-21T00:58:01.938263296Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 21 00:58:02.889839 systemd[1]: Created slice kubepods-besteffort-pod5313f094_6ac2_40d4_8312_a51c1127d021.slice - libcontainer container kubepods-besteffort-pod5313f094_6ac2_40d4_8312_a51c1127d021.slice. Jan 21 00:58:02.927171 kubelet[2881]: I0121 00:58:02.927116 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5313f094-6ac2-40d4-8312-a51c1127d021-xtables-lock\") pod \"kube-proxy-nkkgw\" (UID: \"5313f094-6ac2-40d4-8312-a51c1127d021\") " pod="kube-system/kube-proxy-nkkgw" Jan 21 00:58:02.927171 kubelet[2881]: I0121 00:58:02.927162 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5313f094-6ac2-40d4-8312-a51c1127d021-lib-modules\") pod \"kube-proxy-nkkgw\" (UID: \"5313f094-6ac2-40d4-8312-a51c1127d021\") " pod="kube-system/kube-proxy-nkkgw" Jan 21 00:58:02.927324 kubelet[2881]: I0121 00:58:02.927184 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5313f094-6ac2-40d4-8312-a51c1127d021-kube-proxy\") pod \"kube-proxy-nkkgw\" (UID: \"5313f094-6ac2-40d4-8312-a51c1127d021\") " pod="kube-system/kube-proxy-nkkgw" Jan 21 00:58:02.927324 kubelet[2881]: I0121 00:58:02.927199 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmln7\" (UniqueName: \"kubernetes.io/projected/5313f094-6ac2-40d4-8312-a51c1127d021-kube-api-access-zmln7\") pod \"kube-proxy-nkkgw\" (UID: \"5313f094-6ac2-40d4-8312-a51c1127d021\") " pod="kube-system/kube-proxy-nkkgw" Jan 21 00:58:03.069078 systemd[1]: Created slice kubepods-besteffort-pode5c90e9e_30ae_45d0_97cd_21717852dcde.slice - libcontainer container kubepods-besteffort-pode5c90e9e_30ae_45d0_97cd_21717852dcde.slice. 
Jan 21 00:58:03.128975 kubelet[2881]: I0121 00:58:03.128902 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5c90e9e-30ae-45d0-97cd-21717852dcde-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2hxhn\" (UID: \"e5c90e9e-30ae-45d0-97cd-21717852dcde\") " pod="tigera-operator/tigera-operator-7dcd859c48-2hxhn" Jan 21 00:58:03.128975 kubelet[2881]: I0121 00:58:03.128961 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj59\" (UniqueName: \"kubernetes.io/projected/e5c90e9e-30ae-45d0-97cd-21717852dcde-kube-api-access-qdj59\") pod \"tigera-operator-7dcd859c48-2hxhn\" (UID: \"e5c90e9e-30ae-45d0-97cd-21717852dcde\") " pod="tigera-operator/tigera-operator-7dcd859c48-2hxhn" Jan 21 00:58:03.197940 containerd[1672]: time="2026-01-21T00:58:03.197864773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkkgw,Uid:5313f094-6ac2-40d4-8312-a51c1127d021,Namespace:kube-system,Attempt:0,}" Jan 21 00:58:03.276704 containerd[1672]: time="2026-01-21T00:58:03.276618306Z" level=info msg="connecting to shim 1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514" address="unix:///run/containerd/s/e8350abfa6cb7ed3599d356f3d76fee31bc611ac289d1f1f92dd59169972ce39" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:03.305385 systemd[1]: Started cri-containerd-1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514.scope - libcontainer container 1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514. Jan 21 00:58:03.313000 audit: BPF prog-id=133 op=LOAD Jan 21 00:58:03.315699 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 21 00:58:03.315746 kernel: audit: type=1334 audit(1768957083.313:433): prog-id=133 op=LOAD Jan 21 00:58:03.316000 audit: BPF prog-id=134 op=LOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.319947 kernel: audit: type=1334 audit(1768957083.316:434): prog-id=134 op=LOAD Jan 21 00:58:03.319984 kernel: audit: type=1300 audit(1768957083.316:434): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.323700 kernel: audit: type=1327 audit(1768957083.316:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=134 op=UNLOAD Jan 21 00:58:03.328649 kernel: audit: type=1334 audit(1768957083.316:435): prog-id=134 op=UNLOAD Jan 21 00:58:03.328701 kernel: audit: type=1300 audit(1768957083.316:435): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.333791 kernel: audit: type=1327 audit(1768957083.316:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=135 op=LOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.339187 kernel: audit: type=1334 audit(1768957083.316:436): prog-id=135 op=LOAD Jan 21 00:58:03.339253 kernel: audit: type=1300 audit(1768957083.316:436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.343101 kernel: audit: type=1327 audit(1768957083.316:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=136 op=LOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=136 op=UNLOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=135 op=UNLOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.316000 audit: BPF prog-id=137 op=LOAD Jan 21 00:58:03.316000 audit[2968]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2957 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164653138333935393235376163643365333836336132636238353664 Jan 21 00:58:03.348360 containerd[1672]: time="2026-01-21T00:58:03.348333934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nkkgw,Uid:5313f094-6ac2-40d4-8312-a51c1127d021,Namespace:kube-system,Attempt:0,} returns sandbox id \"1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514\"" Jan 21 00:58:03.354675 containerd[1672]: time="2026-01-21T00:58:03.354577494Z" level=info msg="CreateContainer within sandbox \"1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 21 00:58:03.374833 containerd[1672]: time="2026-01-21T00:58:03.374795726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2hxhn,Uid:e5c90e9e-30ae-45d0-97cd-21717852dcde,Namespace:tigera-operator,Attempt:0,}" Jan 21 00:58:03.379167 containerd[1672]: time="2026-01-21T00:58:03.379079758Z" level=info msg="Container 9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:03.396947 containerd[1672]: time="2026-01-21T00:58:03.396885383Z" level=info msg="CreateContainer within sandbox \"1de183959257acd3e3863a2cb856d6ef56985460b6eb11312c14ae23e6b43514\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006\"" Jan 21 00:58:03.397947 containerd[1672]: time="2026-01-21T00:58:03.397817588Z" level=info msg="StartContainer for \"9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006\"" Jan 21 00:58:03.400194 containerd[1672]: time="2026-01-21T00:58:03.400147670Z" level=info msg="connecting to shim 
9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006" address="unix:///run/containerd/s/e8350abfa6cb7ed3599d356f3d76fee31bc611ac289d1f1f92dd59169972ce39" protocol=ttrpc version=3 Jan 21 00:58:03.411560 containerd[1672]: time="2026-01-21T00:58:03.411525773Z" level=info msg="connecting to shim 51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50" address="unix:///run/containerd/s/37b364db6d34b3627605844c6a053ed4063b0af3326408a24d05f08f99ecb745" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:03.418491 systemd[1]: Started cri-containerd-9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006.scope - libcontainer container 9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006. Jan 21 00:58:03.440358 systemd[1]: Started cri-containerd-51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50.scope - libcontainer container 51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50. Jan 21 00:58:03.450000 audit: BPF prog-id=138 op=LOAD Jan 21 00:58:03.451000 audit: BPF prog-id=139 op=LOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=139 op=UNLOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=140 op=LOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=141 op=LOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=141 op=UNLOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=140 op=UNLOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.451000 audit: BPF prog-id=142 op=LOAD Jan 21 00:58:03.451000 audit[3025]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3013 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531653134386336363265343430656333646635323530376435666335 Jan 21 00:58:03.463000 audit: BPF prog-id=143 op=LOAD Jan 21 00:58:03.463000 audit[2994]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2957 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961336639663463353664376431663038343035636336613061386131 Jan 21 00:58:03.463000 audit: BPF prog-id=144 op=LOAD Jan 21 00:58:03.463000 audit[2994]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2957 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.463000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961336639663463353664376431663038343035636336613061386131 Jan 21 00:58:03.463000 audit: BPF prog-id=144 op=UNLOAD Jan 21 00:58:03.463000 audit[2994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961336639663463353664376431663038343035636336613061386131 Jan 21 00:58:03.463000 audit: BPF prog-id=143 op=UNLOAD Jan 21 00:58:03.463000 audit[2994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2957 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961336639663463353664376431663038343035636336613061386131 Jan 21 00:58:03.463000 audit: BPF prog-id=145 op=LOAD Jan 21 00:58:03.463000 audit[2994]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2957 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961336639663463353664376431663038343035636336613061386131 Jan 21 00:58:03.489473 containerd[1672]: time="2026-01-21T00:58:03.489434329Z" level=info msg="StartContainer for \"9a3f9f4c56d7d1f08405cc6a0a8a1ecafeb3129990c2827aefb941831884e006\" returns successfully" Jan 21 00:58:03.514604 containerd[1672]: time="2026-01-21T00:58:03.514567320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2hxhn,Uid:e5c90e9e-30ae-45d0-97cd-21717852dcde,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50\"" Jan 21 00:58:03.517972 containerd[1672]: time="2026-01-21T00:58:03.517750205Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 21 00:58:03.626000 audit[3102]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.626000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb06ec240 a2=0 a3=7ffdb06ec22c items=0 ppid=3027 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.626000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 00:58:03.628000 audit[3103]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.628000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc512ef130 a2=0 a3=7ffc512ef11c items=0 ppid=3027 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.628000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 00:58:03.629000 audit[3105]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.629000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0ed4e430 a2=0 a3=7ffc0ed4e41c items=0 ppid=3027 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 00:58:03.631000 audit[3107]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.631000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe78290470 a2=0 a3=7ffe7829045c items=0 ppid=3027 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 21 00:58:03.634000 audit[3108]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.634000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3ae4ec40 a2=0 a3=7ffd3ae4ec2c items=0 ppid=3027 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.634000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 21 00:58:03.635000 audit[3109]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.635000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff94c9d00 a2=0 a3=7ffff94c9cec items=0 ppid=3027 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 21 00:58:03.666440 kubelet[2881]: I0121 
00:58:03.666394 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nkkgw" podStartSLOduration=1.666377243 podStartE2EDuration="1.666377243s" podCreationTimestamp="2026-01-21 00:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:58:03.6661286 +0000 UTC m=+6.909981863" watchObservedRunningTime="2026-01-21 00:58:03.666377243 +0000 UTC m=+6.910230518" Jan 21 00:58:03.734000 audit[3110]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.734000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc151a6d20 a2=0 a3=7ffc151a6d0c items=0 ppid=3027 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 00:58:03.737000 audit[3112]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.737000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdebc81270 a2=0 a3=7ffdebc8125c items=0 ppid=3027 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 21 00:58:03.742000 audit[3115]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.742000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff3b6d6c80 a2=0 a3=7fff3b6d6c6c items=0 ppid=3027 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 21 00:58:03.744000 audit[3116]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.744000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3079a3c0 a2=0 a3=7fff3079a3ac items=0 ppid=3027 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 00:58:03.747000 audit[3118]: 
NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.747000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8fbc2ac0 a2=0 a3=7fff8fbc2aac items=0 ppid=3027 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 00:58:03.748000 audit[3119]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.748000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd04ccbf80 a2=0 a3=7ffd04ccbf6c items=0 ppid=3027 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 00:58:03.751000 audit[3121]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.751000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc83c459c0 a2=0 a3=7ffc83c459ac items=0 ppid=3027 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 00:58:03.756000 audit[3124]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.756000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdef812500 a2=0 a3=7ffdef8124ec items=0 ppid=3027 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 21 00:58:03.757000 audit[3125]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.757000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6ff2ab10 a2=0 a3=7ffe6ff2aafc items=0 ppid=3027 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 00:58:03.760000 audit[3127]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.760000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc680b8a0 a2=0 a3=7ffdc680b88c items=0 ppid=3027 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 00:58:03.761000 audit[3128]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.761000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdadbf4fb0 a2=0 a3=7ffdadbf4f9c items=0 ppid=3027 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 00:58:03.764000 audit[3130]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.764000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd69f52810 a2=0 a3=7ffd69f527fc items=0 ppid=3027 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:58:03.768000 audit[3133]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.768000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4137bd60 a2=0 a3=7ffc4137bd4c items=0 ppid=3027 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:58:03.776000 audit[3136]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.776000 audit[3136]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdad37a530 a2=0 a3=7ffdad37a51c items=0 ppid=3027 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 00:58:03.778000 audit[3137]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.778000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd4d63580 a2=0 a3=7fffd4d6356c items=0 ppid=3027 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 00:58:03.781000 audit[3139]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.781000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc4ebbaa50 a2=0 a3=7ffc4ebbaa3c items=0 ppid=3027 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:58:03.785000 audit[3142]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.785000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa7c2c920 a2=0 a3=7fffa7c2c90c items=0 ppid=3027 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:58:03.787000 audit[3143]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.787000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda37e04e0 a2=0 a3=7ffda37e04cc items=0 ppid=3027 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 00:58:03.789000 audit[3145]: 
NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 21 00:58:03.789000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc74fabe50 a2=0 a3=7ffc74fabe3c items=0 ppid=3027 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 00:58:03.824000 audit[3151]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:03.824000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe82e254b0 a2=0 a3=7ffe82e2549c items=0 ppid=3027 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:03.835000 audit[3151]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:03.835000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe82e254b0 a2=0 a3=7ffe82e2549c items=0 ppid=3027 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:03.837000 audit[3156]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.837000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe06398db0 a2=0 a3=7ffe06398d9c items=0 ppid=3027 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 21 00:58:03.841000 audit[3158]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.841000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcd6e0c5a0 a2=0 a3=7ffcd6e0c58c items=0 ppid=3027 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.841000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 21 00:58:03.845000 audit[3161]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.845000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe30ee5530 a2=0 a3=7ffe30ee551c items=0 ppid=3027 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 21 00:58:03.847000 audit[3162]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.847000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedc35bc90 a2=0 a3=7ffedc35bc7c items=0 ppid=3027 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 21 00:58:03.849000 audit[3164]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.849000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff0d39b1d0 a2=0 a3=7fff0d39b1bc items=0 ppid=3027 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 21 00:58:03.850000 audit[3165]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.850000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce6eea600 a2=0 a3=7ffce6eea5ec items=0 ppid=3027 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 21 00:58:03.854000 audit[3167]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.854000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1c6c59b0 a2=0 
a3=7ffc1c6c599c items=0 ppid=3027 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 21 00:58:03.858000 audit[3170]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.858000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff015b2ca0 a2=0 a3=7fff015b2c8c items=0 ppid=3027 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 21 00:58:03.859000 audit[3171]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.859000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13bf5d70 a2=0 a3=7fff13bf5d5c items=0 ppid=3027 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 21 00:58:03.865000 audit[3173]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.865000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5ea98f40 a2=0 a3=7ffe5ea98f2c items=0 ppid=3027 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 21 00:58:03.867000 audit[3174]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.867000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9f462f80 a2=0 a3=7ffe9f462f6c items=0 ppid=3027 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 21 00:58:03.871000 
audit[3176]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.871000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9283fe40 a2=0 a3=7ffc9283fe2c items=0 ppid=3027 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 21 00:58:03.875000 audit[3179]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.875000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdae848c0 a2=0 a3=7ffcdae848ac items=0 ppid=3027 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 21 00:58:03.879000 audit[3182]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.879000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe04611520 a2=0 a3=7ffe0461150c items=0 ppid=3027 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 21 00:58:03.881000 audit[3183]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.881000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc521a94d0 a2=0 a3=7ffc521a94bc items=0 ppid=3027 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 21 00:58:03.884000 audit[3185]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.884000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff46a95420 a2=0 a3=7fff46a9540c items=0 ppid=3027 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:58:03.887000 audit[3188]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.887000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe327e56d0 a2=0 a3=7ffe327e56bc items=0 ppid=3027 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 21 00:58:03.889000 audit[3189]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.889000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbbe60110 a2=0 a3=7ffdbbe600fc items=0 ppid=3027 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 21 00:58:03.892000 audit[3191]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.892000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeaabbb700 a2=0 a3=7ffeaabbb6ec items=0 ppid=3027 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.892000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 21 00:58:03.893000 audit[3192]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.893000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4b9473a0 a2=0 a3=7fff4b94738c items=0 ppid=3027 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 21 00:58:03.897000 audit[3194]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.897000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffdcbbbf7d0 a2=0 a3=7ffdcbbbf7bc items=0 ppid=3027 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:58:03.902000 audit[3197]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 21 00:58:03.902000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdfdd413b0 a2=0 a3=7ffdfdd4139c items=0 ppid=3027 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 21 00:58:03.906000 audit[3199]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 00:58:03.906000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd95579ac0 a2=0 a3=7ffd95579aac items=0 ppid=3027 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.906000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:03.907000 audit[3199]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 21 00:58:03.907000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd95579ac0 a2=0 a3=7ffd95579aac items=0 ppid=3027 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:03.907000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:04.045128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24421929.mount: Deactivated successfully. Jan 21 00:58:05.488775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465463301.mount: Deactivated successfully. 
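The PROCTITLE fields in the audit records above are hex-encoded because the recorded command line keeps its NUL argument separators. A minimal Python sketch that decodes one of the values copied verbatim from the records above (the rule creating the KUBE-FORWARD chain):

```python
# Decode an audit PROCTITLE value: hex string -> NUL-separated argv.
# The example value is copied verbatim from one of the ip6tables records above.
hex_proctitle = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D464F5257415244002D740066696C746572"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter
```

The surrounding records decode the same way: the process behind ppid 3027 (presumably kube-proxy, given the rule comments) is creating the KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING and KUBE-FIREWALL chains and their jump rules in the IPv6 filter and nat tables.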
Jan 21 00:58:05.985645 containerd[1672]: time="2026-01-21T00:58:05.985571469Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:05.987031 containerd[1672]: time="2026-01-21T00:58:05.986838027Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 21 00:58:05.988355 containerd[1672]: time="2026-01-21T00:58:05.988321828Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:05.992085 containerd[1672]: time="2026-01-21T00:58:05.992016850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:05.992627 containerd[1672]: time="2026-01-21T00:58:05.992601529Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.474821173s" Jan 21 00:58:05.992691 containerd[1672]: time="2026-01-21T00:58:05.992679881Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 21 00:58:06.002314 containerd[1672]: time="2026-01-21T00:58:06.002267993Z" level=info msg="CreateContainer within sandbox \"51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 21 00:58:06.033229 containerd[1672]: time="2026-01-21T00:58:06.032980562Z" level=info msg="Container f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:06.044368 containerd[1672]: time="2026-01-21T00:58:06.044335066Z" level=info msg="CreateContainer within sandbox \"51e148c662e440ec3df52507d5fc5d9ef4bc5f112b859cdc1543981c02b84b50\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74\"" Jan 21 00:58:06.045033 containerd[1672]: time="2026-01-21T00:58:06.044951025Z" level=info msg="StartContainer for \"f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74\"" Jan 21 00:58:06.046826 containerd[1672]: time="2026-01-21T00:58:06.046803843Z" level=info msg="connecting to shim f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74" address="unix:///run/containerd/s/37b364db6d34b3627605844c6a053ed4063b0af3326408a24d05f08f99ecb745" protocol=ttrpc version=3 Jan 21 00:58:06.065337 systemd[1]: Started cri-containerd-f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74.scope - libcontainer container f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74. 
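The audit burst that follows comes from runc starting this container. Assuming the standard x86_64 syscall table, syscall=321 is bpf(2) and syscall=3 is close(2), so each BPF prog-id LOAD/UNLOAD pair is runc loading and then releasing BPF programs for the new container's cgroup (on a cgroup v2 host this is most likely the device-controller filter); syscall=46 in the earlier NETFILTER_CFG records is sendmsg(2), the netlink call xtables-nft uses to program nftables. A small reference sketch for those numbers:

```python
# x86_64 syscall numbers that appear in the audit records of this log
# (assumed mapping from the standard x86_64 syscall table).
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}
for n in (46, 321, 3):
    print(f"syscall={n} -> {X86_64_SYSCALLS[n]}(2)")
```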
Jan 21 00:58:06.074000 audit: BPF prog-id=146 op=LOAD Jan 21 00:58:06.074000 audit: BPF prog-id=147 op=LOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=147 op=UNLOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=148 op=LOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=149 op=LOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=149 op=UNLOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=148 op=UNLOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.074000 audit: BPF prog-id=150 op=LOAD Jan 21 00:58:06.074000 audit[3208]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3013 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:06.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336539353734643666376439336639613337643333653437636465 Jan 21 00:58:06.094133 containerd[1672]: time="2026-01-21T00:58:06.094099536Z" level=info msg="StartContainer for \"f53e9574d6f7d93f9a37d33e47cde70dcf91c9e05ff14142bd8678a1bb0d1f74\" returns successfully" Jan 21 00:58:06.454697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609663394.mount: Deactivated successfully. Jan 21 00:58:11.579359 sudo[1929]: pam_unix(sudo:session): session closed for user root Jan 21 00:58:11.578000 audit[1929]: USER_END pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:11.581481 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 21 00:58:11.581562 kernel: audit: type=1106 audit(1768957091.578:513): pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:11.578000 audit[1929]: CRED_DISP pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 21 00:58:11.589189 kernel: audit: type=1104 audit(1768957091.578:514): pid=1929 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 21 00:58:11.682863 sshd[1928]: Connection closed by 4.153.228.146 port 37192 Jan 21 00:58:11.683682 sshd-session[1924]: pam_unix(sshd:session): session closed for user core Jan 21 00:58:11.684000 audit[1924]: USER_END pid=1924 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:58:11.690195 kernel: audit: type=1106 audit(1768957091.684:515): pid=1924 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:58:11.684000 audit[1924]: CRED_DISP pid=1924 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:58:11.692847 systemd[1]: sshd@6-10.0.0.94:22-4.153.228.146:37192.service: Deactivated successfully. Jan 21 00:58:11.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-4.153.228.146:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:11.694985 kernel: audit: type=1104 audit(1768957091.684:516): pid=1924 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 00:58:11.695037 kernel: audit: type=1131 audit(1768957091.692:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-4.153.228.146:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:58:11.699319 systemd[1]: session-8.scope: Deactivated successfully. Jan 21 00:58:11.699717 systemd[1]: session-8.scope: Consumed 4.809s CPU time, 236.4M memory peak. Jan 21 00:58:11.702084 systemd-logind[1652]: Session 8 logged out. Waiting for processes to exit. Jan 21 00:58:11.704181 systemd-logind[1652]: Removed session 8. 
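In the NETFILTER_CFG records, the family field lines up with the numeric address-family constants: the earlier ip6tables records carry family=10 (AF_INET6), while the iptables-restore records that follow carry family=2 (AF_INET), i.e. the same rule set being synced for IPv4. Their proctitle decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, presumably kube-proxy's periodic full-table restore (same parent pid 3027). A one-line check of the family constants:

```python
import socket
# family=2 / family=10 in the NETFILTER_CFG audit records correspond to
# AF_INET (iptables) and AF_INET6 (ip6tables).
print(int(socket.AF_INET), int(socket.AF_INET6))  # 2 10
```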
Jan 21 00:58:12.084000 audit[3287]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.089182 kernel: audit: type=1325 audit(1768957092.084:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.084000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdd4166d0 a2=0 a3=7ffcdd4166bc items=0 ppid=3027 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.096178 kernel: audit: type=1300 audit(1768957092.084:518): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdd4166d0 a2=0 a3=7ffcdd4166bc items=0 ppid=3027 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:12.102173 kernel: audit: type=1327 audit(1768957092.084:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:12.089000 audit[3287]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.106185 kernel: audit: type=1325 audit(1768957092.089:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.089000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdd4166d0 a2=0 a3=0 items=0 ppid=3027 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.112134 kernel: audit: type=1300 audit(1768957092.089:519): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdd4166d0 a2=0 a3=0 items=0 ppid=3027 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:12.134000 audit[3289]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.134000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd1416a070 a2=0 a3=7ffd1416a05c items=0 ppid=3027 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:12.139000 audit[3289]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3289 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:12.139000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1416a070 a2=0 a3=0 items=0 ppid=3027 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:12.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:14.394000 audit[3291]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:14.394000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc8aff9730 a2=0 a3=7ffc8aff971c items=0 ppid=3027 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:14.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:14.400000 audit[3291]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:14.400000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8aff9730 a2=0 a3=0 items=0 ppid=3027 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:14.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:14.409000 audit[3293]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:14.409000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffce218ee90 a2=0 a3=7ffce218ee7c items=0 ppid=3027 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:14.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:14.412000 audit[3293]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:14.412000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce218ee90 a2=0 a3=0 items=0 ppid=3027 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:14.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:15.431000 audit[3295]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:15.431000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffdb5ca12b0 a2=0 a3=7ffdb5ca129c items=0 ppid=3027 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:15.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:15.434000 audit[3295]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:15.434000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb5ca12b0 a2=0 a3=0 items=0 ppid=3027 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:15.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:16.027703 kubelet[2881]: I0121 00:58:16.027269 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2hxhn" podStartSLOduration=10.549789539 podStartE2EDuration="13.027255369s" podCreationTimestamp="2026-01-21 00:58:03 +0000 UTC" firstStartedPulling="2026-01-21 00:58:03.51593969 +0000 UTC m=+6.759792952" lastFinishedPulling="2026-01-21 00:58:05.99340552 +0000 UTC m=+9.237258782" observedRunningTime="2026-01-21 00:58:06.675287025 +0000 UTC m=+9.919140308" watchObservedRunningTime="2026-01-21 00:58:16.027255369 +0000 UTC m=+19.271108655" Jan 21 00:58:16.042151 systemd[1]: Created slice kubepods-besteffort-pod85ecaf5a_2339_4027_9345_58e7312d51c3.slice - libcontainer container kubepods-besteffort-pod85ecaf5a_2339_4027_9345_58e7312d51c3.slice. Jan 21 00:58:16.110977 kubelet[2881]: I0121 00:58:16.110938 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85ecaf5a-2339-4027-9345-58e7312d51c3-typha-certs\") pod \"calico-typha-7474c44f46-cvpk7\" (UID: \"85ecaf5a-2339-4027-9345-58e7312d51c3\") " pod="calico-system/calico-typha-7474c44f46-cvpk7" Jan 21 00:58:16.111519 kubelet[2881]: I0121 00:58:16.111234 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zd5s\" (UniqueName: \"kubernetes.io/projected/85ecaf5a-2339-4027-9345-58e7312d51c3-kube-api-access-2zd5s\") pod \"calico-typha-7474c44f46-cvpk7\" (UID: \"85ecaf5a-2339-4027-9345-58e7312d51c3\") " pod="calico-system/calico-typha-7474c44f46-cvpk7" Jan 21 00:58:16.111519 kubelet[2881]: I0121 00:58:16.111259 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85ecaf5a-2339-4027-9345-58e7312d51c3-tigera-ca-bundle\") pod \"calico-typha-7474c44f46-cvpk7\" (UID: \"85ecaf5a-2339-4027-9345-58e7312d51c3\") " pod="calico-system/calico-typha-7474c44f46-cvpk7" Jan 21 00:58:16.253779 systemd[1]: Created slice kubepods-besteffort-pod45ceeb16_7cc5_4d6a_a96b_d458b7b7e7ad.slice - libcontainer container kubepods-besteffort-pod45ceeb16_7cc5_4d6a_a96b_d458b7b7e7ad.slice. 
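The pod startup figures in the kubelet record above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A minimal check using the timestamps copied from the record, expressed as seconds past 00:58:00 UTC:

```python
# Values copied from the pod_startup_latency_tracker record above
# (seconds past 00:58:00 UTC).
creation         = 3.000000000    # podCreationTimestamp 00:58:03
first_pull       = 3.515939690    # firstStartedPulling
last_pull        = 5.993405520    # lastFinishedPulling
observed_running = 16.027255369   # observedRunningTime

e2e = observed_running - creation        # 13.027255369s == podStartE2EDuration
slo = e2e - (last_pull - first_pull)     # 10.549789539s == podStartSLOduration
print(round(e2e, 9), round(slo, 9))
```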
Jan 21 00:58:16.313884 kubelet[2881]: I0121 00:58:16.313520 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-var-run-calico\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.313884 kubelet[2881]: I0121 00:58:16.313565 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-cni-bin-dir\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.313884 kubelet[2881]: I0121 00:58:16.313580 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-policysync\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.313884 kubelet[2881]: I0121 00:58:16.313606 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-var-lib-calico\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.313884 kubelet[2881]: I0121 00:58:16.313621 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h66fp\" (UniqueName: \"kubernetes.io/projected/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-kube-api-access-h66fp\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314081 kubelet[2881]: I0121 00:58:16.313680 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-flexvol-driver-host\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314081 kubelet[2881]: I0121 00:58:16.313721 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-cni-log-dir\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314081 kubelet[2881]: I0121 00:58:16.313736 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-lib-modules\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314081 kubelet[2881]: I0121 00:58:16.313752 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-node-certs\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314081 kubelet[2881]: I0121 00:58:16.313767 2881 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-xtables-lock\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314224 kubelet[2881]: I0121 00:58:16.313787 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-cni-net-dir\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.314224 kubelet[2881]: I0121 00:58:16.313800 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad-tigera-ca-bundle\") pod \"calico-node-bgbpm\" (UID: \"45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad\") " pod="calico-system/calico-node-bgbpm" Jan 21 00:58:16.346251 containerd[1672]: time="2026-01-21T00:58:16.346174343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7474c44f46-cvpk7,Uid:85ecaf5a-2339-4027-9345-58e7312d51c3,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:16.376857 containerd[1672]: time="2026-01-21T00:58:16.376818574Z" level=info msg="connecting to shim d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46" address="unix:///run/containerd/s/adcde7bcd5ae3cd53bca3b1dac27532acb9fb0982b15cd3c7002e1a58cbeb5fe" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:16.406514 systemd[1]: Started cri-containerd-d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46.scope - libcontainer container d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46. Jan 21 00:58:16.415816 kubelet[2881]: E0121 00:58:16.415573 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.415967 kubelet[2881]: W0121 00:58:16.415953 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.416035 kubelet[2881]: E0121 00:58:16.416026 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.417277 kubelet[2881]: E0121 00:58:16.417259 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.417462 kubelet[2881]: W0121 00:58:16.417363 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.417462 kubelet[2881]: E0121 00:58:16.417381 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.417613 kubelet[2881]: E0121 00:58:16.417606 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.417757 kubelet[2881]: W0121 00:58:16.417650 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.417757 kubelet[2881]: E0121 00:58:16.417660 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.417889 kubelet[2881]: E0121 00:58:16.417882 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.417927 kubelet[2881]: W0121 00:58:16.417921 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.417964 kubelet[2881]: E0121 00:58:16.417958 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.418170 kubelet[2881]: E0121 00:58:16.418121 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.418170 kubelet[2881]: W0121 00:58:16.418128 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.418170 kubelet[2881]: E0121 00:58:16.418134 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.418514 kubelet[2881]: E0121 00:58:16.418361 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.418514 kubelet[2881]: W0121 00:58:16.418416 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.418514 kubelet[2881]: E0121 00:58:16.418425 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.418913 kubelet[2881]: E0121 00:58:16.418903 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.419122 kubelet[2881]: W0121 00:58:16.419039 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.419122 kubelet[2881]: E0121 00:58:16.419053 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.419370 kubelet[2881]: E0121 00:58:16.419363 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.419421 kubelet[2881]: W0121 00:58:16.419414 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.419524 kubelet[2881]: E0121 00:58:16.419457 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.420131 kubelet[2881]: E0121 00:58:16.420048 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.420131 kubelet[2881]: W0121 00:58:16.420058 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.420131 kubelet[2881]: E0121 00:58:16.420067 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.420326 kubelet[2881]: E0121 00:58:16.420319 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.420366 kubelet[2881]: W0121 00:58:16.420360 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.420407 kubelet[2881]: E0121 00:58:16.420401 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.420636 kubelet[2881]: E0121 00:58:16.420576 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.420636 kubelet[2881]: W0121 00:58:16.420582 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.420636 kubelet[2881]: E0121 00:58:16.420589 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.420946 kubelet[2881]: E0121 00:58:16.420860 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.420991 kubelet[2881]: W0121 00:58:16.420984 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.421033 kubelet[2881]: E0121 00:58:16.421027 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.421455 kubelet[2881]: E0121 00:58:16.421252 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.421455 kubelet[2881]: W0121 00:58:16.421380 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.421455 kubelet[2881]: E0121 00:58:16.421391 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.425391 kubelet[2881]: E0121 00:58:16.421622 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.425498 kubelet[2881]: W0121 00:58:16.425483 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.425556 kubelet[2881]: E0121 00:58:16.425547 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.426810 kubelet[2881]: E0121 00:58:16.426798 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.426983 kubelet[2881]: W0121 00:58:16.426868 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.426983 kubelet[2881]: E0121 00:58:16.426881 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.427084 kubelet[2881]: E0121 00:58:16.427077 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.427117 kubelet[2881]: W0121 00:58:16.427111 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.427234 kubelet[2881]: E0121 00:58:16.427160 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.427788 kubelet[2881]: E0121 00:58:16.427778 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.427849 kubelet[2881]: W0121 00:58:16.427842 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.428479 kubelet[2881]: E0121 00:58:16.427881 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.430030 kubelet[2881]: E0121 00:58:16.429927 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.430030 kubelet[2881]: W0121 00:58:16.429939 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.430030 kubelet[2881]: E0121 00:58:16.429950 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.431172 kubelet[2881]: E0121 00:58:16.430458 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.432011 kubelet[2881]: W0121 00:58:16.431900 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.432011 kubelet[2881]: E0121 00:58:16.431919 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.432226 kubelet[2881]: E0121 00:58:16.432124 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.432226 kubelet[2881]: W0121 00:58:16.432133 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.432226 kubelet[2881]: E0121 00:58:16.432141 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.432411 kubelet[2881]: E0121 00:58:16.432337 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.432411 kubelet[2881]: W0121 00:58:16.432345 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.432411 kubelet[2881]: E0121 00:58:16.432352 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.434214 kubelet[2881]: E0121 00:58:16.433952 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.435404 kubelet[2881]: W0121 00:58:16.434256 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434271 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434432 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.435404 kubelet[2881]: W0121 00:58:16.434438 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434445 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434694 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.435404 kubelet[2881]: W0121 00:58:16.434700 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434707 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.435404 kubelet[2881]: E0121 00:58:16.434837 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.435404 kubelet[2881]: W0121 00:58:16.434845 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.435800 kubelet[2881]: E0121 00:58:16.434851 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.435800 kubelet[2881]: E0121 00:58:16.434982 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.435800 kubelet[2881]: W0121 00:58:16.434988 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.435800 kubelet[2881]: E0121 00:58:16.434994 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.436273 kubelet[2881]: E0121 00:58:16.435929 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.436273 kubelet[2881]: W0121 00:58:16.435939 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.436273 kubelet[2881]: E0121 00:58:16.435981 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.436368 kubelet[2881]: E0121 00:58:16.436286 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:16.437093 kubelet[2881]: E0121 00:58:16.436434 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.437173 kubelet[2881]: W0121 00:58:16.437149 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.437307 kubelet[2881]: E0121 00:58:16.437205 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.437400 kubelet[2881]: E0121 00:58:16.437392 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.437438 kubelet[2881]: W0121 00:58:16.437432 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.437470 kubelet[2881]: E0121 00:58:16.437465 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.437678 kubelet[2881]: E0121 00:58:16.437659 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.437883 kubelet[2881]: W0121 00:58:16.437723 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.437883 kubelet[2881]: E0121 00:58:16.437732 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.438061 kubelet[2881]: E0121 00:58:16.438045 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.438061 kubelet[2881]: W0121 00:58:16.438060 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.438119 kubelet[2881]: E0121 00:58:16.438072 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.438524 kubelet[2881]: E0121 00:58:16.438511 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.438524 kubelet[2881]: W0121 00:58:16.438521 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.438593 kubelet[2881]: E0121 00:58:16.438529 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.438964 kubelet[2881]: E0121 00:58:16.438952 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.438964 kubelet[2881]: W0121 00:58:16.438962 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.439035 kubelet[2881]: E0121 00:58:16.438970 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.441663 kubelet[2881]: E0121 00:58:16.441583 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.441730 kubelet[2881]: W0121 00:58:16.441599 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.441730 kubelet[2881]: E0121 00:58:16.441711 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.456000 audit: BPF prog-id=151 op=LOAD Jan 21 00:58:16.458741 kubelet[2881]: E0121 00:58:16.458150 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.458741 kubelet[2881]: W0121 00:58:16.458188 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.458741 kubelet[2881]: E0121 00:58:16.458203 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.457000 audit: BPF prog-id=152 op=LOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=152 op=UNLOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=153 op=LOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=154 op=LOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=154 op=UNLOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=153 op=UNLOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.457000 audit: BPF prog-id=155 op=LOAD Jan 21 00:58:16.457000 audit[3318]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3306 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436353361383835613337353930666363346366366663333235643863 Jan 21 00:58:16.461000 audit[3387]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:16.461000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb26f8170 a2=0 a3=7ffeb26f815c items=0 ppid=3027 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:16.467000 audit[3387]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:16.467000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb26f8170 a2=0 a3=0 items=0 ppid=3027 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:16.501548 kubelet[2881]: E0121 00:58:16.501510 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.501548 kubelet[2881]: W0121 00:58:16.501530 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.501769 kubelet[2881]: E0121 00:58:16.501558 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.501769 kubelet[2881]: E0121 00:58:16.501720 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.501769 kubelet[2881]: W0121 00:58:16.501726 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.501769 kubelet[2881]: E0121 00:58:16.501733 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.501891 kubelet[2881]: E0121 00:58:16.501877 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.501891 kubelet[2881]: W0121 00:58:16.501882 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.501891 kubelet[2881]: E0121 00:58:16.501888 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.502063 kubelet[2881]: E0121 00:58:16.502054 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.502063 kubelet[2881]: W0121 00:58:16.502062 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.502121 kubelet[2881]: E0121 00:58:16.502074 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.503196 kubelet[2881]: E0121 00:58:16.503182 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.503196 kubelet[2881]: W0121 00:58:16.503194 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.503405 kubelet[2881]: E0121 00:58:16.503203 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.503405 kubelet[2881]: E0121 00:58:16.503371 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.503405 kubelet[2881]: W0121 00:58:16.503376 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.503405 kubelet[2881]: E0121 00:58:16.503383 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.503779 kubelet[2881]: E0121 00:58:16.503517 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.503779 kubelet[2881]: W0121 00:58:16.503523 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.503779 kubelet[2881]: E0121 00:58:16.503529 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.503779 kubelet[2881]: E0121 00:58:16.503656 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.503779 kubelet[2881]: W0121 00:58:16.503661 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.503779 kubelet[2881]: E0121 00:58:16.503666 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.503781 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504067 kubelet[2881]: W0121 00:58:16.503798 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.503805 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.503912 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504067 kubelet[2881]: W0121 00:58:16.503917 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.503922 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.504046 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504067 kubelet[2881]: W0121 00:58:16.504050 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504067 kubelet[2881]: E0121 00:58:16.504056 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504179 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504861 kubelet[2881]: W0121 00:58:16.504184 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504189 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504325 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504861 kubelet[2881]: W0121 00:58:16.504333 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504339 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504449 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.504861 kubelet[2881]: W0121 00:58:16.504465 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504470 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.504861 kubelet[2881]: E0121 00:58:16.504574 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.505489 kubelet[2881]: W0121 00:58:16.504579 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.505489 kubelet[2881]: E0121 00:58:16.504585 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.505489 kubelet[2881]: E0121 00:58:16.504709 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.505489 kubelet[2881]: W0121 00:58:16.505180 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.505489 kubelet[2881]: E0121 00:58:16.505195 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.505489 kubelet[2881]: E0121 00:58:16.505343 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.505489 kubelet[2881]: W0121 00:58:16.505349 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.505489 kubelet[2881]: E0121 00:58:16.505355 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.505790 kubelet[2881]: E0121 00:58:16.505548 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.505790 kubelet[2881]: W0121 00:58:16.505554 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.505790 kubelet[2881]: E0121 00:58:16.505560 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.505790 kubelet[2881]: E0121 00:58:16.505754 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.505790 kubelet[2881]: W0121 00:58:16.505762 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.505790 kubelet[2881]: E0121 00:58:16.505768 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.506065 kubelet[2881]: E0121 00:58:16.506053 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.506065 kubelet[2881]: W0121 00:58:16.506061 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.506108 kubelet[2881]: E0121 00:58:16.506070 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.514297 containerd[1672]: time="2026-01-21T00:58:16.514011244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7474c44f46-cvpk7,Uid:85ecaf5a-2339-4027-9345-58e7312d51c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46\"" Jan 21 00:58:16.516385 kubelet[2881]: E0121 00:58:16.516361 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.516385 kubelet[2881]: W0121 00:58:16.516377 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.516734 kubelet[2881]: E0121 00:58:16.516609 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.516734 kubelet[2881]: I0121 00:58:16.516637 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4683584a-9c9b-48ab-9b3d-c5a314d23b04-kubelet-dir\") pod \"csi-node-driver-gng52\" (UID: \"4683584a-9c9b-48ab-9b3d-c5a314d23b04\") " pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:16.517546 containerd[1672]: time="2026-01-21T00:58:16.516975998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 21 00:58:16.517597 kubelet[2881]: E0121 00:58:16.517418 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.517597 kubelet[2881]: W0121 00:58:16.517429 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.517597 kubelet[2881]: E0121 00:58:16.517442 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.517597 kubelet[2881]: I0121 00:58:16.517458 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4gn\" (UniqueName: \"kubernetes.io/projected/4683584a-9c9b-48ab-9b3d-c5a314d23b04-kube-api-access-sj4gn\") pod \"csi-node-driver-gng52\" (UID: \"4683584a-9c9b-48ab-9b3d-c5a314d23b04\") " pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:16.518199 kubelet[2881]: E0121 00:58:16.518183 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.518199 kubelet[2881]: W0121 00:58:16.518195 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.518315 kubelet[2881]: E0121 00:58:16.518206 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.518315 kubelet[2881]: I0121 00:58:16.518329 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4683584a-9c9b-48ab-9b3d-c5a314d23b04-socket-dir\") pod \"csi-node-driver-gng52\" (UID: \"4683584a-9c9b-48ab-9b3d-c5a314d23b04\") " pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:16.518545 kubelet[2881]: E0121 00:58:16.518534 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.518765 kubelet[2881]: W0121 00:58:16.518585 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.518765 kubelet[2881]: E0121 00:58:16.518599 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.519014 kubelet[2881]: E0121 00:58:16.518935 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.519173 kubelet[2881]: W0121 00:58:16.519065 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.519603 kubelet[2881]: E0121 00:58:16.519282 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.520316 kubelet[2881]: E0121 00:58:16.520213 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.520316 kubelet[2881]: W0121 00:58:16.520224 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.520316 kubelet[2881]: E0121 00:58:16.520235 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.521229 kubelet[2881]: E0121 00:58:16.521217 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.521369 kubelet[2881]: W0121 00:58:16.521284 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.521369 kubelet[2881]: E0121 00:58:16.521296 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.521527 kubelet[2881]: E0121 00:58:16.521520 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.521570 kubelet[2881]: W0121 00:58:16.521564 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.521609 kubelet[2881]: E0121 00:58:16.521601 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.521714 kubelet[2881]: I0121 00:58:16.521696 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4683584a-9c9b-48ab-9b3d-c5a314d23b04-registration-dir\") pod \"csi-node-driver-gng52\" (UID: \"4683584a-9c9b-48ab-9b3d-c5a314d23b04\") " pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:16.522037 kubelet[2881]: E0121 00:58:16.521949 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.522037 kubelet[2881]: W0121 00:58:16.521958 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.522037 kubelet[2881]: E0121 00:58:16.521967 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.522324 kubelet[2881]: E0121 00:58:16.522316 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.523177 kubelet[2881]: W0121 00:58:16.522387 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.523177 kubelet[2881]: E0121 00:58:16.522399 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.523349 kubelet[2881]: E0121 00:58:16.523340 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.523502 kubelet[2881]: W0121 00:58:16.523383 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.523502 kubelet[2881]: E0121 00:58:16.523394 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.523502 kubelet[2881]: I0121 00:58:16.523419 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4683584a-9c9b-48ab-9b3d-c5a314d23b04-varrun\") pod \"csi-node-driver-gng52\" (UID: \"4683584a-9c9b-48ab-9b3d-c5a314d23b04\") " pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:16.523653 kubelet[2881]: E0121 00:58:16.523646 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.523689 kubelet[2881]: W0121 00:58:16.523683 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.523723 kubelet[2881]: E0121 00:58:16.523717 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.523962 kubelet[2881]: E0121 00:58:16.523873 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.523962 kubelet[2881]: W0121 00:58:16.523880 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.523962 kubelet[2881]: E0121 00:58:16.523894 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.524086 kubelet[2881]: E0121 00:58:16.524079 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.524119 kubelet[2881]: W0121 00:58:16.524113 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.524165 kubelet[2881]: E0121 00:58:16.524145 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.525275 kubelet[2881]: E0121 00:58:16.525263 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.525354 kubelet[2881]: W0121 00:58:16.525327 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.525354 kubelet[2881]: E0121 00:58:16.525338 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.557483 containerd[1672]: time="2026-01-21T00:58:16.557435511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bgbpm,Uid:45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:16.584882 containerd[1672]: time="2026-01-21T00:58:16.584687809Z" level=info msg="connecting to shim dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e" address="unix:///run/containerd/s/321c0e9c2e8f66486f70a7e71e41393fe29baff708f6294a722c84a93d247a1e" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:16.612458 systemd[1]: Started cri-containerd-dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e.scope - libcontainer container dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e. Jan 21 00:58:16.625371 kubelet[2881]: E0121 00:58:16.625303 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.625537 kubelet[2881]: W0121 00:58:16.625406 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.625537 kubelet[2881]: E0121 00:58:16.625425 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.625738 kubelet[2881]: E0121 00:58:16.625727 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.625738 kubelet[2881]: W0121 00:58:16.625737 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.625788 kubelet[2881]: E0121 00:58:16.625747 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.626167 kubelet[2881]: E0121 00:58:16.626148 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.626256 kubelet[2881]: W0121 00:58:16.626246 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.626302 kubelet[2881]: E0121 00:58:16.626258 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.626443 kubelet[2881]: E0121 00:58:16.626435 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.626443 kubelet[2881]: W0121 00:58:16.626443 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.626491 kubelet[2881]: E0121 00:58:16.626449 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.626756 kubelet[2881]: E0121 00:58:16.626745 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.626756 kubelet[2881]: W0121 00:58:16.626755 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.626803 kubelet[2881]: E0121 00:58:16.626762 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.627131 kubelet[2881]: E0121 00:58:16.627122 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.627169 kubelet[2881]: W0121 00:58:16.627131 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.627169 kubelet[2881]: E0121 00:58:16.627140 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.627588 kubelet[2881]: E0121 00:58:16.627577 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.627621 kubelet[2881]: W0121 00:58:16.627588 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.627621 kubelet[2881]: E0121 00:58:16.627597 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.628148 kubelet[2881]: E0121 00:58:16.628136 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.628148 kubelet[2881]: W0121 00:58:16.628147 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.628277 kubelet[2881]: E0121 00:58:16.628266 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.628477 kubelet[2881]: E0121 00:58:16.628467 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.628477 kubelet[2881]: W0121 00:58:16.628476 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.628538 kubelet[2881]: E0121 00:58:16.628483 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.628742 kubelet[2881]: E0121 00:58:16.628733 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.628742 kubelet[2881]: W0121 00:58:16.628741 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.628801 kubelet[2881]: E0121 00:58:16.628748 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.628923 kubelet[2881]: E0121 00:58:16.628914 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.628923 kubelet[2881]: W0121 00:58:16.628923 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.628975 kubelet[2881]: E0121 00:58:16.628931 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.629280 kubelet[2881]: E0121 00:58:16.629270 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.629280 kubelet[2881]: W0121 00:58:16.629278 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.629349 kubelet[2881]: E0121 00:58:16.629286 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.629554 kubelet[2881]: E0121 00:58:16.629545 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.629582 kubelet[2881]: W0121 00:58:16.629554 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.629582 kubelet[2881]: E0121 00:58:16.629561 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.629951 kubelet[2881]: E0121 00:58:16.629939 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.629951 kubelet[2881]: W0121 00:58:16.629951 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.630021 kubelet[2881]: E0121 00:58:16.629960 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.630297 kubelet[2881]: E0121 00:58:16.630287 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.630297 kubelet[2881]: W0121 00:58:16.630296 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.630353 kubelet[2881]: E0121 00:58:16.630304 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.630635 kubelet[2881]: E0121 00:58:16.630626 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.630661 kubelet[2881]: W0121 00:58:16.630635 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.630661 kubelet[2881]: E0121 00:58:16.630642 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.631138 kubelet[2881]: E0121 00:58:16.631039 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.631138 kubelet[2881]: W0121 00:58:16.631046 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.631138 kubelet[2881]: E0121 00:58:16.631053 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.631357 kubelet[2881]: E0121 00:58:16.631346 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.631357 kubelet[2881]: W0121 00:58:16.631356 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.631437 kubelet[2881]: E0121 00:58:16.631363 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.631596 kubelet[2881]: E0121 00:58:16.631586 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.631912 kubelet[2881]: W0121 00:58:16.631605 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.631912 kubelet[2881]: E0121 00:58:16.631613 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.631912 kubelet[2881]: E0121 00:58:16.631768 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.631912 kubelet[2881]: W0121 00:58:16.631775 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.631912 kubelet[2881]: E0121 00:58:16.631781 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.632141 kubelet[2881]: E0121 00:58:16.632129 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.632290 kubelet[2881]: W0121 00:58:16.632141 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.632290 kubelet[2881]: E0121 00:58:16.632149 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.632953 kubelet[2881]: E0121 00:58:16.632940 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.632953 kubelet[2881]: W0121 00:58:16.632950 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.633084 kubelet[2881]: E0121 00:58:16.632960 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.633912 kubelet[2881]: E0121 00:58:16.633900 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.633912 kubelet[2881]: W0121 00:58:16.633911 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.633993 kubelet[2881]: E0121 00:58:16.633920 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.634292 kubelet[2881]: E0121 00:58:16.634263 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.634292 kubelet[2881]: W0121 00:58:16.634272 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.634292 kubelet[2881]: E0121 00:58:16.634280 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.634000 audit: BPF prog-id=156 op=LOAD Jan 21 00:58:16.635787 kubelet[2881]: E0121 00:58:16.635620 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.635787 kubelet[2881]: W0121 00:58:16.635631 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.635787 kubelet[2881]: E0121 00:58:16.635643 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:16.636397 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 21 00:58:16.636433 kernel: audit: type=1334 audit(1768957096.634:538): prog-id=156 op=LOAD Jan 21 00:58:16.637000 audit: BPF prog-id=157 op=LOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.641653 kernel: audit: type=1334 audit(1768957096.637:539): prog-id=157 op=LOAD Jan 21 00:58:16.641716 kernel: audit: type=1300 audit(1768957096.637:539): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.648141 kubelet[2881]: E0121 00:58:16.648073 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:16.648141 kubelet[2881]: W0121 00:58:16.648091 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:16.648141 kubelet[2881]: E0121 00:58:16.648110 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:16.637000 audit: BPF prog-id=157 op=UNLOAD Jan 21 00:58:16.650747 kernel: audit: type=1327 audit(1768957096.637:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.650814 kernel: audit: type=1334 audit(1768957096.637:540): prog-id=157 op=UNLOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.653181 kernel: audit: type=1300 audit(1768957096.637:540): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.661181 kernel: audit: type=1327 audit(1768957096.637:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.637000 audit: BPF prog-id=158 op=LOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.665508 kernel: audit: type=1334 audit(1768957096.637:541): prog-id=158 op=LOAD Jan 21 00:58:16.665570 kernel: audit: type=1300 audit(1768957096.637:541): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.676179 kernel: audit: type=1327 audit(1768957096.637:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.637000 audit: BPF prog-id=159 op=LOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.637000 audit: BPF prog-id=159 op=UNLOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.637000 audit: BPF prog-id=158 op=UNLOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.637000 audit: BPF prog-id=160 op=LOAD Jan 21 00:58:16.637000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3439 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:16.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463656132376564313830666238616562623562616563313331316361 Jan 21 00:58:16.680266 containerd[1672]: time="2026-01-21T00:58:16.680221087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bgbpm,Uid:45ceeb16-7cc5-4d6a-a96b-d458b7b7e7ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\"" Jan 21 00:58:17.598434 kubelet[2881]: E0121 00:58:17.597883 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:18.055319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144665537.mount: Deactivated successfully. 
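The repeated kubelet errors above ("Failed to unmarshal output for command: init ... unexpected end of JSON input") come from the FlexVolume probe: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and expects a JSON status object on stdout; because the executable is missing, stdout is empty and the JSON decode fails, so the probe is retried and the triplet of messages repeats. A minimal sketch of what a FlexVolume driver's init handler is expected to print (hypothetical example, not the real nodeagent~uds driver; the kubelet only requires a {"status": ...} object):

```go
// flexvol_init_sketch.go: minimal sketch of the JSON a FlexVolume driver
// prints for the "init" call. Hypothetical example, not the nodeagent~uds
// driver referenced in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the fields kubelet's flexvolume driver-call decodes.
type DriverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no separate attach/detach phase
		})
		fmt.Println(string(out))
		return
	}
	// Any other verb is not implemented in this sketch.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```

An absent binary produces empty output, which is exactly the "" / "unexpected end of JSON input" pair logged above.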
Jan 21 00:58:18.982950 containerd[1672]: time="2026-01-21T00:58:18.982451037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:18.984914 containerd[1672]: time="2026-01-21T00:58:18.984874255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 21 00:58:18.987131 containerd[1672]: time="2026-01-21T00:58:18.986902944Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:18.989246 containerd[1672]: time="2026-01-21T00:58:18.989208685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:18.990274 containerd[1672]: time="2026-01-21T00:58:18.990187662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.473179282s" Jan 21 00:58:18.990274 containerd[1672]: time="2026-01-21T00:58:18.990213185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 21 00:58:18.991521 containerd[1672]: time="2026-01-21T00:58:18.991372017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 21 00:58:19.005361 containerd[1672]: time="2026-01-21T00:58:19.005328065Z" level=info msg="CreateContainer within sandbox \"d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 21 00:58:19.016487 containerd[1672]: time="2026-01-21T00:58:19.016385673Z" level=info msg="Container a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:19.019481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278135742.mount: Deactivated successfully. Jan 21 00:58:19.030506 containerd[1672]: time="2026-01-21T00:58:19.030411316Z" level=info msg="CreateContainer within sandbox \"d653a885a37590fcc4cf6fc325d8c1a46f9c5020dce52c3021cf73869aeece46\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5\"" Jan 21 00:58:19.031301 containerd[1672]: time="2026-01-21T00:58:19.031281413Z" level=info msg="StartContainer for \"a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5\"" Jan 21 00:58:19.032690 containerd[1672]: time="2026-01-21T00:58:19.032224234Z" level=info msg="connecting to shim a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5" address="unix:///run/containerd/s/adcde7bcd5ae3cd53bca3b1dac27532acb9fb0982b15cd3c7002e1a58cbeb5fe" protocol=ttrpc version=3 Jan 21 00:58:19.058396 systemd[1]: Started cri-containerd-a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5.scope - libcontainer container a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5. 
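The audit SYSCALL/PROCTITLE records surrounding these runc invocations (the BPF prog-id LOAD/UNLOAD pairs) show syscall 321, which on x86_64 is bpf(2), most likely runc installing the cgroup device filter for the new container; syscall 3 is close(2) on the returned program fd. The proctitle field is hex-encoded because the recorded command line contains NUL separators between arguments. A small sketch to decode such a field (assuming the standard audit encoding of a NUL-separated argv):

```go
// proctitle_decode.go: decode an audit PROCTITLE hex string into argv.
// Sketch only; it assumes the standard audit encoding where the raw
// command line is hex-encoded and arguments are separated by NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: proctitle_decode <hex-proctitle>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// Arguments are NUL-separated in the recorded command line.
	argv := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(argv, " "))
}
```

Fed one of the proctitle values above, this prints something like `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…` followed by the (truncated) task ID, tying each prog-id LOAD/UNLOAD to the container being started.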
Jan 21 00:58:19.069000 audit: BPF prog-id=161 op=LOAD Jan 21 00:58:19.070000 audit: BPF prog-id=162 op=LOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=162 op=UNLOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=163 op=LOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=164 op=LOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=164 op=UNLOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=163 op=UNLOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.070000 audit: BPF prog-id=165 op=LOAD Jan 21 00:58:19.070000 audit[3514]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3306 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:19.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383230643632326430613965613635303536393235393232303866 Jan 21 00:58:19.114760 containerd[1672]: time="2026-01-21T00:58:19.114125470Z" level=info msg="StartContainer for \"a6820d622d0a9ea6505692592208f3724acfc4ad3199d5e080d21d8b226eeeb5\" returns successfully" Jan 21 00:58:19.598063 kubelet[2881]: E0121 00:58:19.597651 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:19.724226 kubelet[2881]: E0121 00:58:19.724198 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.724422 kubelet[2881]: W0121 00:58:19.724264 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.724422 kubelet[2881]: E0121 00:58:19.724283 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.724649 kubelet[2881]: E0121 00:58:19.724638 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.724720 kubelet[2881]: W0121 00:58:19.724696 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.724720 kubelet[2881]: E0121 00:58:19.724708 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.724978 kubelet[2881]: E0121 00:58:19.724929 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.724978 kubelet[2881]: W0121 00:58:19.724941 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.724978 kubelet[2881]: E0121 00:58:19.724948 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.725256 kubelet[2881]: E0121 00:58:19.725249 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.725361 kubelet[2881]: W0121 00:58:19.725314 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.725361 kubelet[2881]: E0121 00:58:19.725325 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.725569 kubelet[2881]: E0121 00:58:19.725533 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.725569 kubelet[2881]: W0121 00:58:19.725540 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.725569 kubelet[2881]: E0121 00:58:19.725547 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.725798 kubelet[2881]: E0121 00:58:19.725774 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.725798 kubelet[2881]: W0121 00:58:19.725781 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.725798 kubelet[2881]: E0121 00:58:19.725787 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.726007 kubelet[2881]: E0121 00:58:19.725996 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.726085 kubelet[2881]: W0121 00:58:19.726049 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.726085 kubelet[2881]: E0121 00:58:19.726057 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.726255 kubelet[2881]: E0121 00:58:19.726249 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.726345 kubelet[2881]: W0121 00:58:19.726273 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.726345 kubelet[2881]: E0121 00:58:19.726279 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.726544 kubelet[2881]: E0121 00:58:19.726504 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.726544 kubelet[2881]: W0121 00:58:19.726511 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.726544 kubelet[2881]: E0121 00:58:19.726518 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.726773 kubelet[2881]: E0121 00:58:19.726759 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.726857 kubelet[2881]: W0121 00:58:19.726815 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.726857 kubelet[2881]: E0121 00:58:19.726824 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.727060 kubelet[2881]: E0121 00:58:19.727022 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.727060 kubelet[2881]: W0121 00:58:19.727028 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.727060 kubelet[2881]: E0121 00:58:19.727034 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.727272 kubelet[2881]: E0121 00:58:19.727264 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.727358 kubelet[2881]: W0121 00:58:19.727318 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.727358 kubelet[2881]: E0121 00:58:19.727328 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.727523 kubelet[2881]: E0121 00:58:19.727517 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.727590 kubelet[2881]: W0121 00:58:19.727555 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.727590 kubelet[2881]: E0121 00:58:19.727562 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.727783 kubelet[2881]: E0121 00:58:19.727776 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.727940 kubelet[2881]: W0121 00:58:19.727864 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.727940 kubelet[2881]: E0121 00:58:19.727875 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.728026 kubelet[2881]: E0121 00:58:19.728021 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.728112 kubelet[2881]: W0121 00:58:19.728054 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.728112 kubelet[2881]: E0121 00:58:19.728061 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.749553 kubelet[2881]: E0121 00:58:19.749453 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.749553 kubelet[2881]: W0121 00:58:19.749472 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.749553 kubelet[2881]: E0121 00:58:19.749489 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.749983 kubelet[2881]: E0121 00:58:19.749890 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.749983 kubelet[2881]: W0121 00:58:19.749900 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.749983 kubelet[2881]: E0121 00:58:19.749910 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.750127 kubelet[2881]: E0121 00:58:19.750121 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.750187 kubelet[2881]: W0121 00:58:19.750175 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.750226 kubelet[2881]: E0121 00:58:19.750220 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.750451 kubelet[2881]: E0121 00:58:19.750391 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.750451 kubelet[2881]: W0121 00:58:19.750398 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.750451 kubelet[2881]: E0121 00:58:19.750404 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.750590 kubelet[2881]: E0121 00:58:19.750585 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.750675 kubelet[2881]: W0121 00:58:19.750623 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.750675 kubelet[2881]: E0121 00:58:19.750632 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.750780 kubelet[2881]: E0121 00:58:19.750775 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.750811 kubelet[2881]: W0121 00:58:19.750807 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.750843 kubelet[2881]: E0121 00:58:19.750838 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.751166 kubelet[2881]: E0121 00:58:19.750993 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.751166 kubelet[2881]: W0121 00:58:19.750999 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.751166 kubelet[2881]: E0121 00:58:19.751005 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.751378 kubelet[2881]: E0121 00:58:19.751371 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.751414 kubelet[2881]: W0121 00:58:19.751409 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.751448 kubelet[2881]: E0121 00:58:19.751442 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.751596 kubelet[2881]: E0121 00:58:19.751591 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.751639 kubelet[2881]: W0121 00:58:19.751633 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.751678 kubelet[2881]: E0121 00:58:19.751672 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.751804 kubelet[2881]: E0121 00:58:19.751798 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.751838 kubelet[2881]: W0121 00:58:19.751832 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.751869 kubelet[2881]: E0121 00:58:19.751864 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.752063 kubelet[2881]: E0121 00:58:19.751992 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.752063 kubelet[2881]: W0121 00:58:19.751999 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.752063 kubelet[2881]: E0121 00:58:19.752004 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.752216 kubelet[2881]: E0121 00:58:19.752210 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.752443 kubelet[2881]: W0121 00:58:19.752253 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.752443 kubelet[2881]: E0121 00:58:19.752272 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.752670 kubelet[2881]: E0121 00:58:19.752660 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.752721 kubelet[2881]: W0121 00:58:19.752714 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.752760 kubelet[2881]: E0121 00:58:19.752754 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.752967 kubelet[2881]: E0121 00:58:19.752961 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.753009 kubelet[2881]: W0121 00:58:19.753003 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.753041 kubelet[2881]: E0121 00:58:19.753036 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.753218 kubelet[2881]: E0121 00:58:19.753213 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.753291 kubelet[2881]: W0121 00:58:19.753259 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.753291 kubelet[2881]: E0121 00:58:19.753267 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.753511 kubelet[2881]: E0121 00:58:19.753456 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.753511 kubelet[2881]: W0121 00:58:19.753463 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.753511 kubelet[2881]: E0121 00:58:19.753468 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:19.753991 kubelet[2881]: E0121 00:58:19.753716 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.753991 kubelet[2881]: W0121 00:58:19.753723 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.753991 kubelet[2881]: E0121 00:58:19.753729 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 21 00:58:19.754275 kubelet[2881]: E0121 00:58:19.754262 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 21 00:58:19.754378 kubelet[2881]: W0121 00:58:19.754369 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 21 00:58:19.754459 kubelet[2881]: E0121 00:58:19.754449 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 21 00:58:20.536700 containerd[1672]: time="2026-01-21T00:58:20.536657618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:20.538027 containerd[1672]: time="2026-01-21T00:58:20.537888421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:20.539564 containerd[1672]: time="2026-01-21T00:58:20.539529602Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:20.542067 containerd[1672]: time="2026-01-21T00:58:20.542009389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:20.542584 containerd[1672]: time="2026-01-21T00:58:20.542458755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.551065731s" Jan 21 00:58:20.542584 containerd[1672]: time="2026-01-21T00:58:20.542486498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 21 00:58:20.547605 containerd[1672]: time="2026-01-21T00:58:20.547573823Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 21 00:58:20.561362 containerd[1672]: time="2026-01-21T00:58:20.560951281Z" level=info msg="Container 78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:20.571387 containerd[1672]: time="2026-01-21T00:58:20.571340567Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741\"" Jan 21 00:58:20.573235 containerd[1672]: time="2026-01-21T00:58:20.572284948Z" level=info msg="StartContainer for \"78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741\"" Jan 21 00:58:20.573840 containerd[1672]: time="2026-01-21T00:58:20.573655323Z" level=info msg="connecting to shim 
78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741" address="unix:///run/containerd/s/321c0e9c2e8f66486f70a7e71e41393fe29baff708f6294a722c84a93d247a1e" protocol=ttrpc version=3 Jan 21 00:58:20.601391 systemd[1]: Started cri-containerd-78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741.scope - libcontainer container 78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741. Jan 21 00:58:20.639000 audit: BPF prog-id=166 op=LOAD Jan 21 00:58:20.639000 audit[3588]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3439 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:20.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633663326130333964326663613037626534333937333437316364 Jan 21 00:58:20.639000 audit: BPF prog-id=167 op=LOAD Jan 21 00:58:20.639000 audit[3588]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3439 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:20.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633663326130333964326663613037626534333937333437316364 Jan 21 00:58:20.639000 audit: BPF prog-id=167 op=UNLOAD Jan 21 00:58:20.639000 audit[3588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:20.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633663326130333964326663613037626534333937333437316364 Jan 21 00:58:20.639000 audit: BPF prog-id=166 op=UNLOAD Jan 21 00:58:20.639000 audit[3588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:20.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633663326130333964326663613037626534333937333437316364 Jan 21 00:58:20.640000 audit: BPF prog-id=168 op=LOAD Jan 21 00:58:20.640000 audit[3588]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3439 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:20.640000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738633663326130333964326663613037626534333937333437316364 Jan 21 00:58:20.661289 containerd[1672]: time="2026-01-21T00:58:20.661258012Z" level=info msg="StartContainer for \"78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741\" returns successfully" Jan 21 00:58:20.670657 systemd[1]: cri-containerd-78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741.scope: Deactivated successfully. Jan 21 00:58:20.672000 audit: BPF prog-id=168 op=UNLOAD Jan 21 00:58:20.673872 containerd[1672]: time="2026-01-21T00:58:20.673846678Z" level=info msg="received container exit event container_id:\"78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741\" id:\"78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741\" pid:3601 exited_at:{seconds:1768957100 nanos:673461863}" Jan 21 00:58:20.700608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78c6c2a039d2fca07be43973471cd15ee60ab64061a7485a92e77734fcd23741-rootfs.mount: Deactivated successfully. Jan 21 00:58:20.704169 kubelet[2881]: I0121 00:58:20.704134 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:58:20.729839 kubelet[2881]: I0121 00:58:20.729626 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7474c44f46-cvpk7" podStartSLOduration=2.255388972 podStartE2EDuration="4.729613297s" podCreationTimestamp="2026-01-21 00:58:16 +0000 UTC" firstStartedPulling="2026-01-21 00:58:16.516600359 +0000 UTC m=+19.760453621" lastFinishedPulling="2026-01-21 00:58:18.990824684 +0000 UTC m=+22.234677946" observedRunningTime="2026-01-21 00:58:19.707700726 +0000 UTC m=+22.951554009" watchObservedRunningTime="2026-01-21 00:58:20.729613297 +0000 UTC m=+23.973466637" Jan 21 00:58:21.600316 kubelet[2881]: E0121 00:58:21.600273 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:22.713905 containerd[1672]: time="2026-01-21T00:58:22.713870539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 21 00:58:23.598192 kubelet[2881]: E0121 00:58:23.597845 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:25.598481 kubelet[2881]: E0121 00:58:25.598388 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:26.460951 containerd[1672]: time="2026-01-21T00:58:26.460897052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:26.462822 containerd[1672]: time="2026-01-21T00:58:26.462777613Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 21 00:58:26.464308 containerd[1672]: time="2026-01-21T00:58:26.464282327Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:26.467351 containerd[1672]: time="2026-01-21T00:58:26.467259349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:26.467607 containerd[1672]: time="2026-01-21T00:58:26.467584740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.753680307s" Jan 21 00:58:26.467652 containerd[1672]: time="2026-01-21T00:58:26.467611125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 21 00:58:26.471812 containerd[1672]: time="2026-01-21T00:58:26.471783562Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 21 00:58:26.485403 containerd[1672]: time="2026-01-21T00:58:26.485367292Z" level=info msg="Container 442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:26.489946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414435591.mount: Deactivated successfully. Jan 21 00:58:26.500597 containerd[1672]: time="2026-01-21T00:58:26.500557439Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b\"" Jan 21 00:58:26.501228 containerd[1672]: time="2026-01-21T00:58:26.501136666Z" level=info msg="StartContainer for \"442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b\"" Jan 21 00:58:26.505426 containerd[1672]: time="2026-01-21T00:58:26.505376437Z" level=info msg="connecting to shim 442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b" address="unix:///run/containerd/s/321c0e9c2e8f66486f70a7e71e41393fe29baff708f6294a722c84a93d247a1e" protocol=ttrpc version=3 Jan 21 00:58:26.525398 systemd[1]: Started cri-containerd-442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b.scope - libcontainer container 442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b. 
Jan 21 00:58:26.577000 audit: BPF prog-id=169 op=LOAD Jan 21 00:58:26.579638 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 21 00:58:26.579687 kernel: audit: type=1334 audit(1768957106.577:560): prog-id=169 op=LOAD Jan 21 00:58:26.577000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.582833 kernel: audit: type=1300 audit(1768957106.577:560): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.579000 audit: BPF prog-id=170 op=LOAD Jan 21 00:58:26.590657 kernel: audit: type=1327 audit(1768957106.577:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.590724 kernel: audit: type=1334 audit(1768957106.579:561): prog-id=170 op=LOAD Jan 21 00:58:26.590746 kernel: audit: type=1300 audit(1768957106.579:561): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.579000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.579000 audit: BPF prog-id=170 op=UNLOAD Jan 21 00:58:26.600442 kernel: audit: type=1327 audit(1768957106.579:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.600531 kernel: audit: type=1334 audit(1768957106.579:562): prog-id=170 op=UNLOAD Jan 21 00:58:26.579000 audit[3644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.607174 kernel: audit: type=1300 
audit(1768957106.579:562): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.579000 audit: BPF prog-id=169 op=UNLOAD Jan 21 00:58:26.614818 kernel: audit: type=1327 audit(1768957106.579:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.614860 kernel: audit: type=1334 audit(1768957106.579:563): prog-id=169 op=UNLOAD Jan 21 00:58:26.579000 audit[3644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.579000 audit: BPF prog-id=171 op=LOAD Jan 21 00:58:26.579000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3439 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:26.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434326263386163653837656133623262353561326133613538356537 Jan 21 00:58:26.626647 containerd[1672]: time="2026-01-21T00:58:26.626610820Z" level=info msg="StartContainer for \"442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b\" returns successfully" Jan 21 00:58:27.599142 kubelet[2881]: E0121 00:58:27.598413 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:28.769424 containerd[1672]: time="2026-01-21T00:58:28.769378747Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 21 00:58:28.771877 systemd[1]: cri-containerd-442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b.scope: Deactivated successfully. 
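The recurring "cni plugin not initialized" / "no network config found in /etc/cni/net.d" messages persist until Calico's install-cni container (started above) writes a network config into that directory; the reload triggered by the write to /etc/cni/net.d/calico-kubeconfig still finds no network config because that file is a kubeconfig, not a CNI config. A rough sketch of the readiness check being reported (this mimics the behaviour; containerd actually uses go-cni/libcni, and the matched extensions are an assumption):

```go
// cni_ready_sketch.go: approximate the "is CNI configured yet?" check that
// keeps failing above until install-cni writes a config file.
// Sketch only; containerd uses go-cni/libcni rather than this code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigured(dir string) bool {
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pattern))
		found = append(found, matches...)
	}
	return len(found) > 0
}

func main() {
	dir := "/etc/cni/net.d"
	if cniConfigured(dir) {
		fmt.Println("network config present in", dir)
		return
	}
	// This is the condition the log keeps reporting for csi-node-driver-gng52.
	fmt.Println("no network config found in", dir, "- cni plugin not initialized")
	os.Exit(1)
}
```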
Jan 21 00:58:28.772140 systemd[1]: cri-containerd-442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b.scope: Consumed 446ms CPU time, 193.6M memory peak, 171.3M written to disk. Jan 21 00:58:28.774074 containerd[1672]: time="2026-01-21T00:58:28.773950462Z" level=info msg="received container exit event container_id:\"442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b\" id:\"442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b\" pid:3658 exited_at:{seconds:1768957108 nanos:773797198}" Jan 21 00:58:28.774000 audit: BPF prog-id=171 op=UNLOAD Jan 21 00:58:28.794700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-442bc8ace87ea3b2b55a2a3a585e7b6cca5a79ca8c1beb54d661b9ec6ac95e1b-rootfs.mount: Deactivated successfully. Jan 21 00:58:28.866209 kubelet[2881]: I0121 00:58:28.866184 2881 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 21 00:58:29.381658 systemd[1]: Created slice kubepods-burstable-pod5a22e677_c291_41f3_b041_3887656f799c.slice - libcontainer container kubepods-burstable-pod5a22e677_c291_41f3_b041_3887656f799c.slice. Jan 21 00:58:29.413173 kubelet[2881]: I0121 00:58:29.413066 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a22e677-c291-41f3-b041-3887656f799c-config-volume\") pod \"coredns-674b8bbfcf-6rrmh\" (UID: \"5a22e677-c291-41f3-b041-3887656f799c\") " pod="kube-system/coredns-674b8bbfcf-6rrmh" Jan 21 00:58:29.413173 kubelet[2881]: I0121 00:58:29.413105 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6jb\" (UniqueName: \"kubernetes.io/projected/5a22e677-c291-41f3-b041-3887656f799c-kube-api-access-px6jb\") pod \"coredns-674b8bbfcf-6rrmh\" (UID: \"5a22e677-c291-41f3-b041-3887656f799c\") " pod="kube-system/coredns-674b8bbfcf-6rrmh" Jan 21 00:58:29.454222 systemd[1]: Created slice kubepods-burstable-podb031c9f6_678f_4f41_8918_567e415496d1.slice - libcontainer container kubepods-burstable-podb031c9f6_678f_4f41_8918_567e415496d1.slice. Jan 21 00:58:29.476360 systemd[1]: Created slice kubepods-besteffort-pod0723cca5_619b_4e7c_893b_f737ac25ba0b.slice - libcontainer container kubepods-besteffort-pod0723cca5_619b_4e7c_893b_f737ac25ba0b.slice. Jan 21 00:58:29.482612 systemd[1]: Created slice kubepods-besteffort-pod94fcfa71_810a_48c9_887b_c0a0865e89bf.slice - libcontainer container kubepods-besteffort-pod94fcfa71_810a_48c9_887b_c0a0865e89bf.slice. Jan 21 00:58:29.491069 systemd[1]: Created slice kubepods-besteffort-pod6a600d11_4238_41cf_86d6_99ea151288a7.slice - libcontainer container kubepods-besteffort-pod6a600d11_4238_41cf_86d6_99ea151288a7.slice. Jan 21 00:58:29.500049 systemd[1]: Created slice kubepods-besteffort-podab9475e3_845f_4249_abaa_5891387a4c3a.slice - libcontainer container kubepods-besteffort-podab9475e3_845f_4249_abaa_5891387a4c3a.slice. Jan 21 00:58:29.505474 systemd[1]: Created slice kubepods-besteffort-podb29f53d3_a9a7_483a_a4bb_96ad4d0f4f37.slice - libcontainer container kubepods-besteffort-podb29f53d3_a9a7_483a_a4bb_96ad4d0f4f37.slice. 
Jan 21 00:58:29.513559 kubelet[2881]: I0121 00:58:29.513523 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0723cca5-619b-4e7c-893b-f737ac25ba0b-tigera-ca-bundle\") pod \"calico-kube-controllers-fbd8b4d78-76pnr\" (UID: \"0723cca5-619b-4e7c-893b-f737ac25ba0b\") " pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" Jan 21 00:58:29.513559 kubelet[2881]: I0121 00:58:29.513558 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcfjs\" (UniqueName: \"kubernetes.io/projected/0723cca5-619b-4e7c-893b-f737ac25ba0b-kube-api-access-vcfjs\") pod \"calico-kube-controllers-fbd8b4d78-76pnr\" (UID: \"0723cca5-619b-4e7c-893b-f737ac25ba0b\") " pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" Jan 21 00:58:29.513762 kubelet[2881]: I0121 00:58:29.513579 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37-calico-apiserver-certs\") pod \"calico-apiserver-5489cbd567-6mfx7\" (UID: \"b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37\") " pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" Jan 21 00:58:29.513762 kubelet[2881]: I0121 00:58:29.513595 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b031c9f6-678f-4f41-8918-567e415496d1-config-volume\") pod \"coredns-674b8bbfcf-rrvtd\" (UID: \"b031c9f6-678f-4f41-8918-567e415496d1\") " pod="kube-system/coredns-674b8bbfcf-rrvtd" Jan 21 00:58:29.513762 kubelet[2881]: I0121 00:58:29.513613 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-ca-bundle\") pod \"whisker-78b98b7459-mhxcs\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " pod="calico-system/whisker-78b98b7459-mhxcs" Jan 21 00:58:29.513762 kubelet[2881]: I0121 00:58:29.513628 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqkr\" (UniqueName: \"kubernetes.io/projected/b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37-kube-api-access-rlqkr\") pod \"calico-apiserver-5489cbd567-6mfx7\" (UID: \"b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37\") " pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" Jan 21 00:58:29.513762 kubelet[2881]: I0121 00:58:29.513643 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-backend-key-pair\") pod \"whisker-78b98b7459-mhxcs\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " pod="calico-system/whisker-78b98b7459-mhxcs" Jan 21 00:58:29.513876 kubelet[2881]: I0121 00:58:29.513659 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c852q\" (UniqueName: \"kubernetes.io/projected/94fcfa71-810a-48c9-887b-c0a0865e89bf-kube-api-access-c852q\") pod \"whisker-78b98b7459-mhxcs\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " pod="calico-system/whisker-78b98b7459-mhxcs" Jan 21 00:58:29.513876 kubelet[2881]: I0121 00:58:29.513710 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab9475e3-845f-4249-abaa-5891387a4c3a-goldmane-key-pair\") pod \"goldmane-666569f655-4stx9\" (UID: \"ab9475e3-845f-4249-abaa-5891387a4c3a\") " pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.513876 kubelet[2881]: I0121 00:58:29.513737 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a600d11-4238-41cf-86d6-99ea151288a7-calico-apiserver-certs\") pod \"calico-apiserver-5489cbd567-xwgzw\" (UID: \"6a600d11-4238-41cf-86d6-99ea151288a7\") " pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" Jan 21 00:58:29.514593 kubelet[2881]: I0121 00:58:29.514565 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t257m\" (UniqueName: \"kubernetes.io/projected/6a600d11-4238-41cf-86d6-99ea151288a7-kube-api-access-t257m\") pod \"calico-apiserver-5489cbd567-xwgzw\" (UID: \"6a600d11-4238-41cf-86d6-99ea151288a7\") " pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" Jan 21 00:58:29.514625 kubelet[2881]: I0121 00:58:29.514598 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9475e3-845f-4249-abaa-5891387a4c3a-goldmane-ca-bundle\") pod \"goldmane-666569f655-4stx9\" (UID: \"ab9475e3-845f-4249-abaa-5891387a4c3a\") " pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.514625 kubelet[2881]: I0121 00:58:29.514614 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab9475e3-845f-4249-abaa-5891387a4c3a-config\") pod \"goldmane-666569f655-4stx9\" (UID: \"ab9475e3-845f-4249-abaa-5891387a4c3a\") " pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.514672 kubelet[2881]: I0121 00:58:29.514629 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2jpr\" (UniqueName: \"kubernetes.io/projected/ab9475e3-845f-4249-abaa-5891387a4c3a-kube-api-access-w2jpr\") pod \"goldmane-666569f655-4stx9\" (UID: \"ab9475e3-845f-4249-abaa-5891387a4c3a\") " pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.514672 kubelet[2881]: I0121 00:58:29.514645 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4kg\" (UniqueName: \"kubernetes.io/projected/b031c9f6-678f-4f41-8918-567e415496d1-kube-api-access-bj4kg\") pod \"coredns-674b8bbfcf-rrvtd\" (UID: \"b031c9f6-678f-4f41-8918-567e415496d1\") " pod="kube-system/coredns-674b8bbfcf-rrvtd" Jan 21 00:58:29.602913 systemd[1]: Created slice kubepods-besteffort-pod4683584a_9c9b_48ab_9b3d_c5a314d23b04.slice - libcontainer container kubepods-besteffort-pod4683584a_9c9b_48ab_9b3d_c5a314d23b04.slice. 
Jan 21 00:58:29.604647 containerd[1672]: time="2026-01-21T00:58:29.604612666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gng52,Uid:4683584a-9c9b-48ab-9b3d-c5a314d23b04,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:29.683366 containerd[1672]: time="2026-01-21T00:58:29.683315575Z" level=error msg="Failed to destroy network for sandbox \"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.686346 containerd[1672]: time="2026-01-21T00:58:29.686260613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rrmh,Uid:5a22e677-c291-41f3-b041-3887656f799c,Namespace:kube-system,Attempt:0,}" Jan 21 00:58:29.686853 containerd[1672]: time="2026-01-21T00:58:29.686692422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gng52,Uid:4683584a-9c9b-48ab-9b3d-c5a314d23b04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.687077 kubelet[2881]: E0121 00:58:29.686879 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.687077 kubelet[2881]: E0121 00:58:29.686947 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:29.687077 kubelet[2881]: E0121 00:58:29.686967 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gng52" Jan 21 00:58:29.687312 kubelet[2881]: E0121 00:58:29.687030 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcbee1b4ab672b1dc156020c921a0ef068d8318277f8480dc69d2cf23e5258b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:29.734248 containerd[1672]: time="2026-01-21T00:58:29.734149232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 21 00:58:29.740970 containerd[1672]: time="2026-01-21T00:58:29.740859799Z" level=error msg="Failed to destroy network for sandbox \"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.744997 containerd[1672]: time="2026-01-21T00:58:29.744935744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rrmh,Uid:5a22e677-c291-41f3-b041-3887656f799c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.745487 kubelet[2881]: E0121 00:58:29.745455 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.745547 kubelet[2881]: E0121 00:58:29.745509 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rrmh" Jan 21 00:58:29.745547 kubelet[2881]: E0121 00:58:29.745530 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rrmh" Jan 21 00:58:29.745608 kubelet[2881]: E0121 00:58:29.745574 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rrmh_kube-system(5a22e677-c291-41f3-b041-3887656f799c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rrmh_kube-system(5a22e677-c291-41f3-b041-3887656f799c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56da4b2f61f5f27dd5cadb7940f42fcfc3d40a81b67046ac559fcee3636cf77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rrmh" podUID="5a22e677-c291-41f3-b041-3887656f799c" Jan 21 00:58:29.758945 containerd[1672]: time="2026-01-21T00:58:29.758775678Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvtd,Uid:b031c9f6-678f-4f41-8918-567e415496d1,Namespace:kube-system,Attempt:0,}" Jan 21 00:58:29.782683 containerd[1672]: time="2026-01-21T00:58:29.782648622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fbd8b4d78-76pnr,Uid:0723cca5-619b-4e7c-893b-f737ac25ba0b,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:29.789452 containerd[1672]: time="2026-01-21T00:58:29.789413711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b98b7459-mhxcs,Uid:94fcfa71-810a-48c9-887b-c0a0865e89bf,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:29.798560 containerd[1672]: time="2026-01-21T00:58:29.798492982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-xwgzw,Uid:6a600d11-4238-41cf-86d6-99ea151288a7,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:58:29.807004 systemd[1]: run-netns-cni\x2d96aca705\x2d41c2\x2dc8a2\x2d63d6\x2d61a6fec08858.mount: Deactivated successfully. Jan 21 00:58:29.807962 containerd[1672]: time="2026-01-21T00:58:29.807933767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-6mfx7,Uid:b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:58:29.810250 containerd[1672]: time="2026-01-21T00:58:29.808001453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4stx9,Uid:ab9475e3-845f-4249-abaa-5891387a4c3a,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:29.887703 containerd[1672]: time="2026-01-21T00:58:29.887658769Z" level=error msg="Failed to destroy network for sandbox \"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.892307 containerd[1672]: time="2026-01-21T00:58:29.892269445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvtd,Uid:b031c9f6-678f-4f41-8918-567e415496d1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.892768 kubelet[2881]: E0121 00:58:29.892667 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.892768 kubelet[2881]: E0121 00:58:29.892716 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rrvtd" Jan 21 00:58:29.892768 kubelet[2881]: E0121 00:58:29.892738 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rrvtd" Jan 21 00:58:29.894436 kubelet[2881]: E0121 00:58:29.893257 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rrvtd_kube-system(b031c9f6-678f-4f41-8918-567e415496d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rrvtd_kube-system(b031c9f6-678f-4f41-8918-567e415496d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6f5f5a0b1cc44b563c02bc67d5c0d3a04a8be88d37eca5dc6e981a5e3874c9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rrvtd" podUID="b031c9f6-678f-4f41-8918-567e415496d1" Jan 21 00:58:29.929868 containerd[1672]: time="2026-01-21T00:58:29.929807348Z" level=error msg="Failed to destroy network for sandbox \"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.933716 containerd[1672]: time="2026-01-21T00:58:29.933573331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fbd8b4d78-76pnr,Uid:0723cca5-619b-4e7c-893b-f737ac25ba0b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.934412 kubelet[2881]: E0121 00:58:29.933782 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.934412 kubelet[2881]: E0121 00:58:29.933828 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" Jan 21 00:58:29.934412 kubelet[2881]: E0121 00:58:29.933846 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" Jan 21 00:58:29.936016 kubelet[2881]: E0121 00:58:29.934302 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02e9f910beefca2eae621a52cdfcfc38ee6383008f620cc6437bd832f49bae06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:58:29.960058 containerd[1672]: time="2026-01-21T00:58:29.960013661Z" level=error msg="Failed to destroy network for sandbox \"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.964197 containerd[1672]: time="2026-01-21T00:58:29.964078553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b98b7459-mhxcs,Uid:94fcfa71-810a-48c9-887b-c0a0865e89bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.964687 kubelet[2881]: E0121 00:58:29.964659 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.964763 kubelet[2881]: E0121 00:58:29.964708 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78b98b7459-mhxcs" Jan 21 00:58:29.964763 kubelet[2881]: E0121 00:58:29.964725 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78b98b7459-mhxcs" Jan 21 00:58:29.966369 kubelet[2881]: E0121 00:58:29.964772 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78b98b7459-mhxcs_calico-system(94fcfa71-810a-48c9-887b-c0a0865e89bf)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78b98b7459-mhxcs_calico-system(94fcfa71-810a-48c9-887b-c0a0865e89bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc271261ec82efb5a68830466cf3d7b9af00c33d9357059f43baa28d2c159930\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78b98b7459-mhxcs" podUID="94fcfa71-810a-48c9-887b-c0a0865e89bf" Jan 21 00:58:29.967172 containerd[1672]: time="2026-01-21T00:58:29.967111518Z" level=error msg="Failed to destroy network for sandbox \"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.968759 containerd[1672]: time="2026-01-21T00:58:29.968728298Z" level=error msg="Failed to destroy network for sandbox \"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.972067 containerd[1672]: time="2026-01-21T00:58:29.971022708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4stx9,Uid:ab9475e3-845f-4249-abaa-5891387a4c3a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.972185 kubelet[2881]: E0121 00:58:29.971881 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.972185 kubelet[2881]: E0121 00:58:29.971927 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.972185 kubelet[2881]: E0121 00:58:29.971953 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4stx9" Jan 21 00:58:29.972278 kubelet[2881]: E0121 00:58:29.971996 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575ad56d7f69575653accf2a248a6e432863ccae0456965a47465545880d2820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:58:29.974345 containerd[1672]: time="2026-01-21T00:58:29.974305547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-xwgzw,Uid:6a600d11-4238-41cf-86d6-99ea151288a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.974579 kubelet[2881]: E0121 00:58:29.974553 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.974719 kubelet[2881]: E0121 00:58:29.974665 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" Jan 21 00:58:29.974719 kubelet[2881]: E0121 00:58:29.974690 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" Jan 21 00:58:29.974880 kubelet[2881]: E0121 00:58:29.974861 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b99da67f797aa2353d74ea51305b97dafcc6d25ac25aa136a6ad5c91cabb55f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:58:29.975120 
containerd[1672]: time="2026-01-21T00:58:29.975018594Z" level=error msg="Failed to destroy network for sandbox \"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.978426 containerd[1672]: time="2026-01-21T00:58:29.978399416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-6mfx7,Uid:b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.978646 kubelet[2881]: E0121 00:58:29.978627 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 21 00:58:29.978753 kubelet[2881]: E0121 00:58:29.978739 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" Jan 21 00:58:29.978814 kubelet[2881]: E0121 00:58:29.978804 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" Jan 21 00:58:29.978971 kubelet[2881]: E0121 00:58:29.978914 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7614814e081b3b16a86bf26eb893b46b548eaa84dabebd2083aeb9bf0bddf740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:58:30.795010 systemd[1]: run-netns-cni\x2df513e396\x2d08d6\x2dc5a8\x2d7d18\x2dd7299bc69526.mount: Deactivated successfully. Jan 21 00:58:30.795102 systemd[1]: run-netns-cni\x2daa8a3115\x2de6bd\x2de8f9\x2d0980\x2df29a2d596504.mount: Deactivated successfully. 
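Every sandbox failure above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes to a hostPath mount once it is running, and calico-node has not started yet at this point in the log. A minimal sketch of the same readiness check (the path is taken from the errors above; the helper name is ours):

```python
# Reproduce the readiness check the Calico CNI plugin is failing on: the
# plugin reads the node name from /var/lib/calico/nodename, which the
# calico/node container writes once it is up.
NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready() -> bool:
    try:
        with open(NODENAME_FILE) as f:
            return bool(f.read().strip())
    except FileNotFoundError:
        # Matches the log: "stat /var/lib/calico/nodename: no such file or
        # directory: check that the calico/node container is running ..."
        return False

print(calico_node_ready())
```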
Jan 21 00:58:30.795150 systemd[1]: run-netns-cni\x2d3439a590\x2d220b\x2d2510\x2d1f06\x2dbb455b66f091.mount: Deactivated successfully. Jan 21 00:58:30.797225 systemd[1]: run-netns-cni\x2daf4b6097\x2d49a3\x2df73e\x2d2c63\x2dad7d7d4f8de5.mount: Deactivated successfully. Jan 21 00:58:30.797287 systemd[1]: run-netns-cni\x2dfc176969\x2d71b9\x2de251\x2d9dd7\x2d4159cd6fa1fb.mount: Deactivated successfully. Jan 21 00:58:30.797333 systemd[1]: run-netns-cni\x2d287b5154\x2d4c28\x2d9843\x2d6671\x2dacfc1a55e8c9.mount: Deactivated successfully. Jan 21 00:58:36.565477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842529576.mount: Deactivated successfully. Jan 21 00:58:36.595570 containerd[1672]: time="2026-01-21T00:58:36.595507488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:36.596963 containerd[1672]: time="2026-01-21T00:58:36.596812553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 21 00:58:36.598536 containerd[1672]: time="2026-01-21T00:58:36.598504336Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:36.600682 containerd[1672]: time="2026-01-21T00:58:36.600656527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 21 00:58:36.601111 containerd[1672]: time="2026-01-21T00:58:36.601087095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.866880335s" Jan 21 00:58:36.601198 containerd[1672]: time="2026-01-21T00:58:36.601186326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 21 00:58:36.625081 containerd[1672]: time="2026-01-21T00:58:36.625043628Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 21 00:58:36.640404 containerd[1672]: time="2026-01-21T00:58:36.640360730Z" level=info msg="Container 311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:36.658678 containerd[1672]: time="2026-01-21T00:58:36.658604221Z" level=info msg="CreateContainer within sandbox \"dcea27ed180fb8aebb5baec1311cab01b5f957a93d00a9420001605e6a4fe85e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240\"" Jan 21 00:58:36.659339 containerd[1672]: time="2026-01-21T00:58:36.659310790Z" level=info msg="StartContainer for \"311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240\"" Jan 21 00:58:36.662169 containerd[1672]: time="2026-01-21T00:58:36.662124366Z" level=info msg="connecting to shim 311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240" 
address="unix:///run/containerd/s/321c0e9c2e8f66486f70a7e71e41393fe29baff708f6294a722c84a93d247a1e" protocol=ttrpc version=3 Jan 21 00:58:36.719361 systemd[1]: Started cri-containerd-311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240.scope - libcontainer container 311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240. Jan 21 00:58:36.769000 audit: BPF prog-id=172 op=LOAD Jan 21 00:58:36.771624 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 21 00:58:36.771672 kernel: audit: type=1334 audit(1768957116.769:566): prog-id=172 op=LOAD Jan 21 00:58:36.769000 audit[3913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.775359 kernel: audit: type=1300 audit(1768957116.769:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.779505 kernel: audit: type=1327 audit(1768957116.769:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.772000 audit: BPF prog-id=173 op=LOAD Jan 21 00:58:36.782457 kernel: audit: type=1334 audit(1768957116.772:567): prog-id=173 op=LOAD Jan 21 00:58:36.772000 audit[3913]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.789317 kernel: audit: type=1300 audit(1768957116.772:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.789380 kernel: audit: type=1327 audit(1768957116.772:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.772000 audit: BPF prog-id=173 op=UNLOAD Jan 21 00:58:36.792366 kernel: audit: type=1334 audit(1768957116.772:568): prog-id=173 op=UNLOAD Jan 21 00:58:36.772000 
audit[3913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.799927 kernel: audit: type=1300 audit(1768957116.772:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.799987 kernel: audit: type=1327 audit(1768957116.772:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.772000 audit: BPF prog-id=172 op=UNLOAD Jan 21 00:58:36.772000 audit[3913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.772000 audit: BPF prog-id=174 op=LOAD Jan 21 00:58:36.772000 audit[3913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3439 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:36.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331313630373036316566393230623232333464366634336433613937 Jan 21 00:58:36.804172 kernel: audit: type=1334 audit(1768957116.772:569): prog-id=172 op=UNLOAD Jan 21 00:58:36.818995 containerd[1672]: time="2026-01-21T00:58:36.818455439Z" level=info msg="StartContainer for \"311607061ef920b2234d6f43d3a976957a0d68ca42b3a1fb9611b404d6d30240\" returns successfully" Jan 21 00:58:36.908665 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 21 00:58:36.908776 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
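The audit SYSCALL records around this container start are all from runc: arch=c000003e is AUDIT_ARCH_X86_64, syscall=321 is bpf(2) (matching the BPF prog-id LOAD records, presumably runc's device-cgroup filter) and syscall=3 is close(2) (the UNLOADs appear when those descriptors are closed). A rough sketch that labels such records; the field parsing is simplified and not a substitute for ausearch:

```python
# Label audit SYSCALL records like the ones above. On x86_64 (arch=c000003e),
# syscall 321 is bpf(2) and syscall 3 is close(2).
import re

X86_64_SYSCALLS = {3: "close", 321: "bpf"}  # only the numbers seen above

def label(record: str) -> str:
    fields = dict(re.findall(r"(\w+)=([^\s]+)", record))
    nr = int(fields.get("syscall", -1))
    return f"{fields.get('comm')} -> {X86_64_SYSCALLS.get(nr, f'syscall {nr}')}"

print(label('arch=c000003e syscall=321 success=yes exit=20 comm="runc"'))
# "runc" -> bpf
```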
Jan 21 00:58:37.062862 kubelet[2881]: I0121 00:58:37.062680 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c852q\" (UniqueName: \"kubernetes.io/projected/94fcfa71-810a-48c9-887b-c0a0865e89bf-kube-api-access-c852q\") pod \"94fcfa71-810a-48c9-887b-c0a0865e89bf\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " Jan 21 00:58:37.062862 kubelet[2881]: I0121 00:58:37.062811 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-ca-bundle\") pod \"94fcfa71-810a-48c9-887b-c0a0865e89bf\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " Jan 21 00:58:37.062862 kubelet[2881]: I0121 00:58:37.062834 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-backend-key-pair\") pod \"94fcfa71-810a-48c9-887b-c0a0865e89bf\" (UID: \"94fcfa71-810a-48c9-887b-c0a0865e89bf\") " Jan 21 00:58:37.065852 kubelet[2881]: I0121 00:58:37.065803 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "94fcfa71-810a-48c9-887b-c0a0865e89bf" (UID: "94fcfa71-810a-48c9-887b-c0a0865e89bf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 00:58:37.071430 kubelet[2881]: I0121 00:58:37.071343 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fcfa71-810a-48c9-887b-c0a0865e89bf-kube-api-access-c852q" (OuterVolumeSpecName: "kube-api-access-c852q") pod "94fcfa71-810a-48c9-887b-c0a0865e89bf" (UID: "94fcfa71-810a-48c9-887b-c0a0865e89bf"). InnerVolumeSpecName "kube-api-access-c852q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 00:58:37.071628 kubelet[2881]: I0121 00:58:37.071573 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "94fcfa71-810a-48c9-887b-c0a0865e89bf" (UID: "94fcfa71-810a-48c9-887b-c0a0865e89bf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 00:58:37.164345 kubelet[2881]: I0121 00:58:37.164284 2881 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c852q\" (UniqueName: \"kubernetes.io/projected/94fcfa71-810a-48c9-887b-c0a0865e89bf-kube-api-access-c852q\") on node \"ci-4547-0-0-n-1ed4874c6e\" DevicePath \"\"" Jan 21 00:58:37.164345 kubelet[2881]: I0121 00:58:37.164311 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-ca-bundle\") on node \"ci-4547-0-0-n-1ed4874c6e\" DevicePath \"\"" Jan 21 00:58:37.164345 kubelet[2881]: I0121 00:58:37.164320 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94fcfa71-810a-48c9-887b-c0a0865e89bf-whisker-backend-key-pair\") on node \"ci-4547-0-0-n-1ed4874c6e\" DevicePath \"\"" Jan 21 00:58:37.566543 systemd[1]: var-lib-kubelet-pods-94fcfa71\x2d810a\x2d48c9\x2d887b\x2dc0a0865e89bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc852q.mount: Deactivated successfully. Jan 21 00:58:37.566909 systemd[1]: var-lib-kubelet-pods-94fcfa71\x2d810a\x2d48c9\x2d887b\x2dc0a0865e89bf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 21 00:58:37.603758 systemd[1]: Removed slice kubepods-besteffort-pod94fcfa71_810a_48c9_887b_c0a0865e89bf.slice - libcontainer container kubepods-besteffort-pod94fcfa71_810a_48c9_887b_c0a0865e89bf.slice. Jan 21 00:58:37.782297 kubelet[2881]: I0121 00:58:37.782016 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bgbpm" podStartSLOduration=1.861924656 podStartE2EDuration="21.782003548s" podCreationTimestamp="2026-01-21 00:58:16 +0000 UTC" firstStartedPulling="2026-01-21 00:58:16.681830223 +0000 UTC m=+19.925683484" lastFinishedPulling="2026-01-21 00:58:36.601909114 +0000 UTC m=+39.845762376" observedRunningTime="2026-01-21 00:58:37.781991209 +0000 UTC m=+41.025844492" watchObservedRunningTime="2026-01-21 00:58:37.782003548 +0000 UTC m=+41.025856828" Jan 21 00:58:37.852781 systemd[1]: Created slice kubepods-besteffort-pod181c2eed_c9fe_4d1f_ab58_c3add0b057f7.slice - libcontainer container kubepods-besteffort-pod181c2eed_c9fe_4d1f_ab58_c3add0b057f7.slice. 
Jan 21 00:58:37.969713 kubelet[2881]: I0121 00:58:37.969653 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6snb\" (UniqueName: \"kubernetes.io/projected/181c2eed-c9fe-4d1f-ab58-c3add0b057f7-kube-api-access-p6snb\") pod \"whisker-5d559fddc-6x8n7\" (UID: \"181c2eed-c9fe-4d1f-ab58-c3add0b057f7\") " pod="calico-system/whisker-5d559fddc-6x8n7" Jan 21 00:58:37.969713 kubelet[2881]: I0121 00:58:37.969722 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181c2eed-c9fe-4d1f-ab58-c3add0b057f7-whisker-backend-key-pair\") pod \"whisker-5d559fddc-6x8n7\" (UID: \"181c2eed-c9fe-4d1f-ab58-c3add0b057f7\") " pod="calico-system/whisker-5d559fddc-6x8n7" Jan 21 00:58:37.969935 kubelet[2881]: I0121 00:58:37.969737 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181c2eed-c9fe-4d1f-ab58-c3add0b057f7-whisker-ca-bundle\") pod \"whisker-5d559fddc-6x8n7\" (UID: \"181c2eed-c9fe-4d1f-ab58-c3add0b057f7\") " pod="calico-system/whisker-5d559fddc-6x8n7" Jan 21 00:58:38.157811 containerd[1672]: time="2026-01-21T00:58:38.157560543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d559fddc-6x8n7,Uid:181c2eed-c9fe-4d1f-ab58-c3add0b057f7,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:38.458786 kubelet[2881]: I0121 00:58:38.458443 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 00:58:38.988000 audit[4141]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:38.988000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff0dbb76d0 a2=0 a3=7fff0dbb76bc items=0 ppid=3027 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:38.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:38.993000 audit[4141]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:38.993000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff0dbb76d0 a2=0 a3=7fff0dbb76bc items=0 ppid=3027 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:38.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:39.023765 systemd-networkd[1556]: cali4868517131f: Link UP Jan 21 00:58:39.024008 systemd-networkd[1556]: cali4868517131f: Gained carrier Jan 21 00:58:39.096911 containerd[1672]: 2026-01-21 00:58:38.208 [INFO][4004] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 21 00:58:39.096911 containerd[1672]: 2026-01-21 00:58:38.337 [INFO][4004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0 whisker-5d559fddc- 
calico-system 181c2eed-c9fe-4d1f-ab58-c3add0b057f7 896 0 2026-01-21 00:58:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d559fddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e whisker-5d559fddc-6x8n7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4868517131f [] [] }} ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-" Jan 21 00:58:39.096911 containerd[1672]: 2026-01-21 00:58:38.338 [INFO][4004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.096911 containerd[1672]: 2026-01-21 00:58:38.387 [INFO][4100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" HandleID="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.387 [INFO][4100] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" HandleID="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"whisker-5d559fddc-6x8n7", "timestamp":"2026-01-21 00:58:38.387415687 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.387 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.387 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.387 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.397 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.412 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.418 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.420 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097360 containerd[1672]: 2026-01-21 00:58:38.422 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.422 [INFO][4100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.424 [INFO][4100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62 Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.432 [INFO][4100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.438 [INFO][4100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.129/26] block=192.168.4.128/26 handle="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.438 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.129/26] handle="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.438 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
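The IPAM walk above ends with 192.168.4.129/26 assigned out of the node's affine block 192.168.4.128/26. A /26 block holds 64 addresses; .128 is the block's network address, so .129 is the first candidate handed out in this log. A quick illustration with the standard library:

```python
# The Calico IPAM log above assigns 192.168.4.129 out of the node's affine
# /26 block: 64 addresses per block, .128 is the network address.
import ipaddress

block = ipaddress.ip_network("192.168.4.128/26")
print(block.num_addresses)                              # 64
print(block.network_address)                            # 192.168.4.128
print(next(block.hosts()))                              # 192.168.4.129
print(ipaddress.ip_address("192.168.4.129") in block)   # True
```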
Jan 21 00:58:39.097591 containerd[1672]: 2026-01-21 00:58:38.438 [INFO][4100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.129/26] IPv6=[] ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" HandleID="k8s-pod-network.1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.097756 containerd[1672]: 2026-01-21 00:58:38.442 [INFO][4004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0", GenerateName:"whisker-5d559fddc-", Namespace:"calico-system", SelfLink:"", UID:"181c2eed-c9fe-4d1f-ab58-c3add0b057f7", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d559fddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"whisker-5d559fddc-6x8n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4868517131f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:39.097756 containerd[1672]: 2026-01-21 00:58:38.442 [INFO][4004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.129/32] ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.097840 containerd[1672]: 2026-01-21 00:58:38.442 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4868517131f ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.097840 containerd[1672]: 2026-01-21 00:58:39.082 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.097894 containerd[1672]: 2026-01-21 00:58:39.083 [INFO][4004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" 
Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0", GenerateName:"whisker-5d559fddc-", Namespace:"calico-system", SelfLink:"", UID:"181c2eed-c9fe-4d1f-ab58-c3add0b057f7", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d559fddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62", Pod:"whisker-5d559fddc-6x8n7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4868517131f", MAC:"f6:e5:ec:65:e0:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:39.097952 containerd[1672]: 2026-01-21 00:58:39.095 [INFO][4004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" Namespace="calico-system" Pod="whisker-5d559fddc-6x8n7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-whisker--5d559fddc--6x8n7-eth0" Jan 21 00:58:39.145912 containerd[1672]: time="2026-01-21T00:58:39.145872714Z" level=info msg="connecting to shim 1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62" address="unix:///run/containerd/s/abf82729a6f952120b4e7f6a1301868064993f3b6e95a3d3ca8cd57f1233c6e2" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:39.170392 systemd[1]: Started cri-containerd-1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62.scope - libcontainer container 1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62. 
Jan 21 00:58:39.179000 audit: BPF prog-id=175 op=LOAD Jan 21 00:58:39.180000 audit: BPF prog-id=176 op=LOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=176 op=UNLOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=177 op=LOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=178 op=LOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=178 op=UNLOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=177 op=UNLOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.180000 audit: BPF prog-id=179 op=LOAD Jan 21 00:58:39.180000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4154 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137363666376233383135306430633966376331303431363538323432 Jan 21 00:58:39.214997 containerd[1672]: time="2026-01-21T00:58:39.214717829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d559fddc-6x8n7,Uid:181c2eed-c9fe-4d1f-ab58-c3add0b057f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1766f7b38150d0c9f7c1041658242f50147636d4364707a8bccdb3bd000a6c62\"" Jan 21 00:58:39.216994 containerd[1672]: time="2026-01-21T00:58:39.216970265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 00:58:39.545462 containerd[1672]: time="2026-01-21T00:58:39.545410974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:39.547582 containerd[1672]: time="2026-01-21T00:58:39.547535740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:39.547582 containerd[1672]: time="2026-01-21T00:58:39.547565778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 00:58:39.548581 kubelet[2881]: E0121 00:58:39.547987 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:58:39.548581 kubelet[2881]: E0121 00:58:39.548033 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:58:39.553007 kubelet[2881]: E0121 00:58:39.552889 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:adf2c7942adb4caa8cbe3abb5e35591e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:39.555065 containerd[1672]: time="2026-01-21T00:58:39.555036554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 00:58:39.601231 kubelet[2881]: I0121 00:58:39.601200 2881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fcfa71-810a-48c9-887b-c0a0865e89bf" path="/var/lib/kubelet/pods/94fcfa71-810a-48c9-887b-c0a0865e89bf/volumes" Jan 21 00:58:39.726000 audit: BPF prog-id=180 op=LOAD Jan 21 00:58:39.726000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5245000 a2=98 a3=1fffffffffffffff items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.726000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.726000 audit: BPF prog-id=180 op=UNLOAD Jan 21 00:58:39.726000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff5244fd0 a3=0 items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.726000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.726000 audit: BPF prog-id=181 op=LOAD Jan 21 00:58:39.726000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5244ee0 a2=94 a3=3 items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.726000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.726000 audit: BPF prog-id=181 op=UNLOAD Jan 21 00:58:39.726000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff5244ee0 a2=94 a3=3 items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.726000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.726000 audit: BPF prog-id=182 op=LOAD Jan 21 00:58:39.726000 audit[4244]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff5244f20 a2=94 a3=7ffff5245100 items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.726000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.727000 audit: BPF prog-id=182 op=UNLOAD Jan 21 00:58:39.727000 audit[4244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff5244f20 a2=94 a3=7ffff5245100 items=0 ppid=4194 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.727000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 21 00:58:39.729000 audit: BPF prog-id=183 op=LOAD Jan 21 00:58:39.729000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef357a4a0 a2=98 a3=3 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.729000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.730000 audit: BPF 
prog-id=183 op=UNLOAD Jan 21 00:58:39.730000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffef357a470 a3=0 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.730000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.730000 audit: BPF prog-id=184 op=LOAD Jan 21 00:58:39.730000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef357a290 a2=94 a3=54428f items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.730000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.730000 audit: BPF prog-id=184 op=UNLOAD Jan 21 00:58:39.730000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef357a290 a2=94 a3=54428f items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.730000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.730000 audit: BPF prog-id=185 op=LOAD Jan 21 00:58:39.730000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef357a2c0 a2=94 a3=2 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.730000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.730000 audit: BPF prog-id=185 op=UNLOAD Jan 21 00:58:39.730000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef357a2c0 a2=0 a3=2 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.730000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.895183 containerd[1672]: time="2026-01-21T00:58:39.895050475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:39.898069 containerd[1672]: time="2026-01-21T00:58:39.897961221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 00:58:39.898069 containerd[1672]: time="2026-01-21T00:58:39.898004125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:39.900039 kubelet[2881]: E0121 00:58:39.899825 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:58:39.900039 kubelet[2881]: E0121 00:58:39.899884 2881 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:58:39.900164 kubelet[2881]: E0121 00:58:39.900004 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:39.901403 kubelet[2881]: E0121 00:58:39.901350 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:58:39.917000 audit: BPF prog-id=186 op=LOAD Jan 21 00:58:39.917000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef357a180 a2=94 a3=1 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.917000 audit: BPF prog-id=186 op=UNLOAD Jan 21 00:58:39.917000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef357a180 a2=94 a3=1 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.917000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.927000 audit: BPF prog-id=187 op=LOAD Jan 21 00:58:39.927000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef357a170 a2=94 a3=4 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.927000 audit: BPF prog-id=187 op=UNLOAD Jan 21 00:58:39.927000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef357a170 a2=0 a3=4 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.927000 audit: BPF prog-id=188 op=LOAD Jan 21 00:58:39.927000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef3579fd0 a2=94 a3=5 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.927000 audit: BPF prog-id=188 op=UNLOAD Jan 21 00:58:39.927000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef3579fd0 a2=0 a3=5 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=189 op=LOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef357a1f0 a2=94 a3=6 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=189 op=UNLOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef357a1f0 a2=0 a3=6 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=190 op=LOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef35799a0 a2=94 a3=88 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=191 op=LOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffef3579820 a2=94 a3=2 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=191 op=UNLOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffef3579850 a2=0 a3=7ffef3579950 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.928000 audit: BPF prog-id=190 op=UNLOAD Jan 21 00:58:39.928000 audit[4245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=38f81d10 a2=0 a3=4e29c5a7a1ed87c4 items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 21 00:58:39.936000 audit: BPF prog-id=192 op=LOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe64ceefa0 a2=98 a3=1999999999999999 items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.936000 audit: BPF prog-id=192 op=UNLOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe64ceef70 a3=0 items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.936000 audit: BPF prog-id=193 op=LOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 
a0=5 a1=7ffe64ceee80 a2=94 a3=ffff items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.936000 audit: BPF prog-id=193 op=UNLOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe64ceee80 a2=94 a3=ffff items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.936000 audit: BPF prog-id=194 op=LOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe64ceeec0 a2=94 a3=7ffe64cef0a0 items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.936000 audit: BPF prog-id=194 op=UNLOAD Jan 21 00:58:39.936000 audit[4249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe64ceeec0 a2=94 a3=7ffe64cef0a0 items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:39.936000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 21 00:58:39.999267 systemd-networkd[1556]: vxlan.calico: Link UP Jan 21 00:58:40.000309 systemd-networkd[1556]: vxlan.calico: Gained carrier Jan 21 00:58:40.035000 audit: BPF prog-id=195 op=LOAD Jan 21 00:58:40.035000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffce7a7130 a2=98 a3=0 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.035000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=195 op=UNLOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffce7a7100 a3=0 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=196 op=LOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffce7a6f40 a2=94 a3=54428f items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=196 op=UNLOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffce7a6f40 a2=94 a3=54428f items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=197 op=LOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffce7a6f70 a2=94 a3=2 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=197 op=UNLOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffce7a6f70 a2=0 a3=2 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=198 op=LOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffce7a6d20 a2=94 a3=4 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=198 op=UNLOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffce7a6d20 a2=94 a3=4 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=199 op=LOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffce7a6e20 a2=94 a3=7fffce7a6fa0 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.037000 audit: BPF prog-id=199 op=UNLOAD Jan 21 00:58:40.037000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffce7a6e20 a2=0 a3=7fffce7a6fa0 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.037000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.040000 audit: BPF prog-id=200 op=LOAD Jan 21 00:58:40.040000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffce7a6550 a2=94 a3=2 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.040000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.040000 audit: BPF prog-id=200 op=UNLOAD Jan 21 00:58:40.040000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffce7a6550 a2=0 a3=2 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.040000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.040000 audit: BPF prog-id=201 op=LOAD Jan 21 00:58:40.040000 audit[4275]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffce7a6650 a2=94 a3=30 items=0 ppid=4194 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.040000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 21 00:58:40.046000 audit: BPF prog-id=202 op=LOAD Jan 21 00:58:40.046000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6d573b70 a2=98 a3=0 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.046000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.048000 audit: BPF prog-id=202 op=UNLOAD Jan 21 00:58:40.048000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6d573b40 a3=0 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.048000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.049000 audit: BPF prog-id=203 op=LOAD Jan 21 00:58:40.049000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d573960 a2=94 a3=54428f items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.049000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.050000 audit: BPF prog-id=203 op=UNLOAD Jan 21 00:58:40.050000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d573960 a2=94 a3=54428f items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.050000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.050000 audit: BPF prog-id=204 op=LOAD Jan 21 00:58:40.050000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d573990 a2=94 a3=2 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.050000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.050000 audit: BPF prog-id=204 op=UNLOAD Jan 21 00:58:40.050000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d573990 a2=0 a3=2 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.050000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.089647 systemd-networkd[1556]: cali4868517131f: Gained IPv6LL Jan 21 00:58:40.216000 audit: BPF prog-id=205 op=LOAD Jan 21 00:58:40.216000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d573850 a2=94 a3=1 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.218000 audit: BPF prog-id=205 op=UNLOAD Jan 21 00:58:40.218000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d573850 a2=94 a3=1 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.218000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=206 op=LOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d573840 a2=94 a3=4 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=206 op=UNLOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6d573840 a2=0 a3=4 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=207 op=LOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6d5736a0 a2=94 a3=5 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=207 op=UNLOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6d5736a0 a2=0 a3=5 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=208 op=LOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d5738c0 a2=94 a3=6 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=208 op=UNLOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6d5738c0 a2=0 a3=6 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.229000 audit: BPF prog-id=209 op=LOAD Jan 21 00:58:40.229000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d573070 a2=94 a3=88 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.230000 audit: BPF prog-id=210 op=LOAD Jan 21 00:58:40.230000 audit[4278]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff6d572ef0 a2=94 a3=2 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.230000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.230000 audit: BPF prog-id=210 op=UNLOAD Jan 21 00:58:40.230000 audit[4278]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6d572f20 a2=0 a3=7fff6d573020 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.230000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.230000 audit: BPF prog-id=209 op=UNLOAD Jan 21 00:58:40.230000 audit[4278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=13801d10 a2=0 a3=a14e81562e9823c6 items=0 ppid=4194 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.230000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 21 00:58:40.234000 audit: BPF prog-id=201 op=UNLOAD Jan 21 00:58:40.234000 audit[4194]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000dd6780 a2=0 a3=0 items=0 ppid=4020 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.234000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 21 00:58:40.281000 audit[4301]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4301 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.281000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdcd591e80 a2=0 a3=7ffdcd591e6c items=0 ppid=4194 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.281000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.288000 audit[4304]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4304 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.288000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffecff64100 a2=0 a3=7ffecff640ec items=0 ppid=4194 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.288000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.291000 audit[4302]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.291000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd593b5fc0 a2=0 a3=7ffd593b5fac items=0 ppid=4194 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.298000 audit[4305]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.298000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffede543970 a2=0 a3=7ffede54395c items=0 ppid=4194 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.298000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.600606 containerd[1672]: time="2026-01-21T00:58:40.600373258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gng52,Uid:4683584a-9c9b-48ab-9b3d-c5a314d23b04,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:40.602182 containerd[1672]: time="2026-01-21T00:58:40.601205637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvtd,Uid:b031c9f6-678f-4f41-8918-567e415496d1,Namespace:kube-system,Attempt:0,}" Jan 21 00:58:40.602182 containerd[1672]: time="2026-01-21T00:58:40.601279817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fbd8b4d78-76pnr,Uid:0723cca5-619b-4e7c-893b-f737ac25ba0b,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:40.745794 systemd-networkd[1556]: calie9c7eb86e24: Link UP Jan 21 00:58:40.746431 systemd-networkd[1556]: calie9c7eb86e24: Gained carrier Jan 21 00:58:40.761066 containerd[1672]: 2026-01-21 00:58:40.674 [INFO][4320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0 coredns-674b8bbfcf- kube-system b031c9f6-678f-4f41-8918-567e415496d1 820 0 2026-01-21 00:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e coredns-674b8bbfcf-rrvtd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie9c7eb86e24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-" Jan 21 00:58:40.761066 containerd[1672]: 2026-01-21 00:58:40.674 [INFO][4320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.761066 containerd[1672]: 2026-01-21 00:58:40.707 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" HandleID="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" 
Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.707 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" HandleID="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"coredns-674b8bbfcf-rrvtd", "timestamp":"2026-01-21 00:58:40.707128884 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.707 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.707 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.707 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.715 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.722 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.725 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.727 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761435 containerd[1672]: 2026-01-21 00:58:40.729 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.729 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.730 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082 Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.735 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.739 [INFO][4352] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.130/26] block=192.168.4.128/26 handle="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.739 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.130/26] 
handle="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.739 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 00:58:40.761628 containerd[1672]: 2026-01-21 00:58:40.739 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.130/26] IPv6=[] ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" HandleID="k8s-pod-network.2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.741 [INFO][4320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b031c9f6-678f-4f41-8918-567e415496d1", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"coredns-674b8bbfcf-rrvtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c7eb86e24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.741 [INFO][4320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.130/32] ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.742 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9c7eb86e24 ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" 
Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.747 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.747 [INFO][4320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b031c9f6-678f-4f41-8918-567e415496d1", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082", Pod:"coredns-674b8bbfcf-rrvtd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie9c7eb86e24", MAC:"7e:77:9d:9c:b7:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:40.761823 containerd[1672]: 2026-01-21 00:58:40.758 [INFO][4320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" Namespace="kube-system" Pod="coredns-674b8bbfcf-rrvtd" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--rrvtd-eth0" Jan 21 00:58:40.767242 kubelet[2881]: E0121 00:58:40.767064 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:58:40.798000 audit[4387]: NETFILTER_CFG table=filter:123 family=2 entries=42 op=nft_register_chain pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.798000 audit[4387]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffe3ffa68f0 a2=0 a3=7ffe3ffa68dc items=0 ppid=4194 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.798000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.800688 containerd[1672]: time="2026-01-21T00:58:40.800625343Z" level=info msg="connecting to shim 2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082" address="unix:///run/containerd/s/bed6a35c6f497408ecdf30607778f28b58b90886e0fa7b5a9f6cb9467292c348" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:40.809000 audit[4403]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:40.809000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf3ceb0b0 a2=0 a3=7ffcf3ceb09c items=0 ppid=3027 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:40.814000 audit[4403]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:40.814000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf3ceb0b0 a2=0 a3=0 items=0 ppid=3027 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:40.832467 systemd[1]: Started cri-containerd-2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082.scope - libcontainer container 2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082. 
Jan 21 00:58:40.846000 audit: BPF prog-id=211 op=LOAD Jan 21 00:58:40.847000 audit: BPF prog-id=212 op=LOAD Jan 21 00:58:40.847000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.847000 audit: BPF prog-id=212 op=UNLOAD Jan 21 00:58:40.847000 audit[4405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.848000 audit: BPF prog-id=213 op=LOAD Jan 21 00:58:40.848000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.848000 audit: BPF prog-id=214 op=LOAD Jan 21 00:58:40.848000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.848000 audit: BPF prog-id=214 op=UNLOAD Jan 21 00:58:40.848000 audit[4405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.848000 audit: BPF prog-id=213 op=UNLOAD Jan 21 00:58:40.848000 audit[4405]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.849000 audit: BPF prog-id=215 op=LOAD Jan 21 00:58:40.849000 audit[4405]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4393 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263303331643566353531663566653762326239323166613936363961 Jan 21 00:58:40.856286 systemd-networkd[1556]: cali22d23e2ac32: Link UP Jan 21 00:58:40.856441 systemd-networkd[1556]: cali22d23e2ac32: Gained carrier Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.679 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0 csi-node-driver- calico-system 4683584a-9c9b-48ab-9b3d-c5a314d23b04 709 0 2026-01-21 00:58:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e csi-node-driver-gng52 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22d23e2ac32 [] [] }} ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.679 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.719 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" HandleID="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.719 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" HandleID="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"csi-node-driver-gng52", "timestamp":"2026-01-21 00:58:40.719007686 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.719 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.739 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.740 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.816 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.822 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.830 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.831 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.834 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.834 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.836 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0 Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.841 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.849 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.131/26] block=192.168.4.128/26 handle="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.849 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.131/26] handle="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.849 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
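The SYSCALL records in this stream identify calls by number for arch=c000003e, the audit architecture token for x86_64. The handful of numbers that appear here map as follows; this is a manual excerpt of the x86_64 syscall table (the ausyscall utility from the audit userspace package performs the same lookup), and the per-number comments only restate how each call is paired with other records above:

    # x86_64 syscall numbers seen in the audit records above.
    SYSCALLS_X86_64 = {
        3:   "close",     # bpftool/runc closing a program fd, paired with "BPF prog-id=... op=UNLOAD"
        46:  "sendmsg",   # netlink traffic behind the NETFILTER_CFG nft_register_* records
        263: "unlinkat",  # file removal by calico-node, paired with "BPF prog-id=201 op=UNLOAD",
                          # consistent with removing a pinned program under /sys/fs/bpf
        321: "bpf",       # program loads by runc and bpftool, paired with "BPF prog-id=... op=LOAD"
    }

    for number in (3, 46, 263, 321):
        print(number, SYSCALLS_X86_64[number])
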
Jan 21 00:58:40.877948 containerd[1672]: 2026-01-21 00:58:40.849 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.131/26] IPv6=[] ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" HandleID="k8s-pod-network.aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.853 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4683584a-9c9b-48ab-9b3d-c5a314d23b04", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"csi-node-driver-gng52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d23e2ac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.853 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.131/32] ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.853 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22d23e2ac32 ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.856 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.857 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4683584a-9c9b-48ab-9b3d-c5a314d23b04", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0", Pod:"csi-node-driver-gng52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22d23e2ac32", MAC:"92:f7:1d:ed:5d:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:40.879524 containerd[1672]: 2026-01-21 00:58:40.876 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" Namespace="calico-system" Pod="csi-node-driver-gng52" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-csi--node--driver--gng52-eth0" Jan 21 00:58:40.895000 audit[4432]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:40.895000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffc13ad9470 a2=0 a3=7ffc13ad945c items=0 ppid=4194 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.895000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:40.905947 containerd[1672]: time="2026-01-21T00:58:40.905891993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvtd,Uid:b031c9f6-678f-4f41-8918-567e415496d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082\"" Jan 21 00:58:40.910943 containerd[1672]: time="2026-01-21T00:58:40.910882765Z" level=info msg="CreateContainer within sandbox \"2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 00:58:40.912135 containerd[1672]: 
time="2026-01-21T00:58:40.911711211Z" level=info msg="connecting to shim aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0" address="unix:///run/containerd/s/20c1dfb7133ade3baf61b06c34f80658e30b3d39263c5c3d35858b5a9adb303e" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:40.937467 systemd[1]: Started cri-containerd-aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0.scope - libcontainer container aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0. Jan 21 00:58:40.948000 audit: BPF prog-id=216 op=LOAD Jan 21 00:58:40.948000 audit: BPF prog-id=217 op=LOAD Jan 21 00:58:40.948000 audit[4459]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.948000 audit: BPF prog-id=217 op=UNLOAD Jan 21 00:58:40.948000 audit[4459]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.948000 audit: BPF prog-id=218 op=LOAD Jan 21 00:58:40.948000 audit[4459]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.949000 audit: BPF prog-id=219 op=LOAD Jan 21 00:58:40.949000 audit[4459]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.949000 audit: BPF prog-id=219 op=UNLOAD Jan 21 00:58:40.949000 audit[4459]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.949000 audit: BPF prog-id=218 op=UNLOAD Jan 21 00:58:40.949000 audit[4459]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.949000 audit: BPF prog-id=220 op=LOAD Jan 21 00:58:40.949000 audit[4459]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4446 pid=4459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:40.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161323032623266313630336438356533323734373232656563343236 Jan 21 00:58:40.970708 systemd-networkd[1556]: cali4716a09ccec: Link UP Jan 21 00:58:40.970837 systemd-networkd[1556]: cali4716a09ccec: Gained carrier Jan 21 00:58:40.991720 containerd[1672]: time="2026-01-21T00:58:40.991295934Z" level=info msg="Container c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:40.992288 containerd[1672]: time="2026-01-21T00:58:40.991617440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gng52,Uid:4683584a-9c9b-48ab-9b3d-c5a314d23b04,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa202b2f1603d85e3274722eec426dec209b3f52fc5aca2860af987728db14e0\"" Jan 21 00:58:40.996106 containerd[1672]: time="2026-01-21T00:58:40.996070773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.678 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0 calico-kube-controllers-fbd8b4d78- calico-system 0723cca5-619b-4e7c-893b-f737ac25ba0b 821 0 2026-01-21 00:58:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fbd8b4d78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e calico-kube-controllers-fbd8b4d78-76pnr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4716a09ccec [] [] }} ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" 
Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.678 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.728 [INFO][4362] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" HandleID="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.729 [INFO][4362] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" HandleID="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"calico-kube-controllers-fbd8b4d78-76pnr", "timestamp":"2026-01-21 00:58:40.728920833 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.729 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.851 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.851 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.916 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.922 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.928 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.931 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.936 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.936 [INFO][4362] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.938 [INFO][4362] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9 Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.945 [INFO][4362] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.955 [INFO][4362] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.132/26] block=192.168.4.128/26 handle="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.955 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.132/26] handle="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.955 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
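The three RunPodSandbox requests issued around 00:58:40.600 are handled concurrently, but their address assignments serialize on the host-wide IPAM lock visible in the ipam_plugin.go records: the calico-kube-controllers request logs "About to acquire" at 00:58:40.729 and "Acquired" only at 00:58:40.851, after the two earlier assignments have released the lock. The wait can be read straight off those two timestamps:

    from datetime import datetime

    # Timestamps copied from the ipam/ipam_plugin.go records for the
    # calico-kube-controllers-fbd8b4d78-76pnr sandbox (the third concurrent CNI ADD).
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    about_to_acquire = datetime.strptime("2026-01-21 00:58:40.729", fmt)
    acquired = datetime.strptime("2026-01-21 00:58:40.851", fmt)

    print((acquired - about_to_acquire).total_seconds())  # 0.122 seconds spent waiting on the lock
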
Jan 21 00:58:41.011630 containerd[1672]: 2026-01-21 00:58:40.955 [INFO][4362] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.132/26] IPv6=[] ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" HandleID="k8s-pod-network.6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:40.963 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0", GenerateName:"calico-kube-controllers-fbd8b4d78-", Namespace:"calico-system", SelfLink:"", UID:"0723cca5-619b-4e7c-893b-f737ac25ba0b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fbd8b4d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"calico-kube-controllers-fbd8b4d78-76pnr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4716a09ccec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:40.963 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.132/32] ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:40.963 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4716a09ccec ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:40.970 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" 
WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:40.982 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0", GenerateName:"calico-kube-controllers-fbd8b4d78-", Namespace:"calico-system", SelfLink:"", UID:"0723cca5-619b-4e7c-893b-f737ac25ba0b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fbd8b4d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9", Pod:"calico-kube-controllers-fbd8b4d78-76pnr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4716a09ccec", MAC:"b6:f9:c8:2f:2a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:41.012861 containerd[1672]: 2026-01-21 00:58:41.006 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" Namespace="calico-system" Pod="calico-kube-controllers-fbd8b4d78-76pnr" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--kube--controllers--fbd8b4d78--76pnr-eth0" Jan 21 00:58:41.015189 containerd[1672]: time="2026-01-21T00:58:41.015139610Z" level=info msg="CreateContainer within sandbox \"2c031d5f551f5fe7b2b921fa9669a9fdea2bef59b1e2af49dce4909b4455a082\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939\"" Jan 21 00:58:41.017418 containerd[1672]: time="2026-01-21T00:58:41.016330559Z" level=info msg="StartContainer for \"c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939\"" Jan 21 00:58:41.017418 containerd[1672]: time="2026-01-21T00:58:41.016978765Z" level=info msg="connecting to shim c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939" address="unix:///run/containerd/s/bed6a35c6f497408ecdf30607778f28b58b90886e0fa7b5a9f6cb9467292c348" protocol=ttrpc version=3 Jan 21 00:58:41.022000 audit[4492]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 
00:58:41.022000 audit[4492]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffc693ae390 a2=0 a3=7ffc693ae37c items=0 ppid=4194 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.022000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:41.035560 systemd[1]: Started cri-containerd-c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939.scope - libcontainer container c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939. Jan 21 00:58:41.045124 containerd[1672]: time="2026-01-21T00:58:41.045087525Z" level=info msg="connecting to shim 6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9" address="unix:///run/containerd/s/4552084f0d36d4107ec59989f218837dca043c9929adc8184b549646129235be" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:41.052000 audit: BPF prog-id=221 op=LOAD Jan 21 00:58:41.052000 audit: BPF prog-id=222 op=LOAD Jan 21 00:58:41.052000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.052000 audit: BPF prog-id=222 op=UNLOAD Jan 21 00:58:41.052000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.053000 audit: BPF prog-id=223 op=LOAD Jan 21 00:58:41.053000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.053000 audit: BPF prog-id=224 op=LOAD Jan 21 00:58:41.053000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 21 00:58:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.053000 audit: BPF prog-id=224 op=UNLOAD Jan 21 00:58:41.053000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.053000 audit: BPF prog-id=223 op=UNLOAD Jan 21 00:58:41.053000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.053000 audit: BPF prog-id=225 op=LOAD Jan 21 00:58:41.053000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4393 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353133336264346265623864646539326138396163333066653339 Jan 21 00:58:41.071835 systemd[1]: Started cri-containerd-6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9.scope - libcontainer container 6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9. 
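[annotation] The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv strings. As a reading aid only (not part of the captured journal), a minimal Python sketch for decoding them; the sample value is copied from the iptables-nft-restore record earlier in this section:

```python
# Hypothetical helper for reading this log: decode an audit PROCTITLE value.
# The kernel emits the process title as hex, with argv entries separated by NUL bytes.
def decode_proctitle(hex_blob: str) -> str:
    raw = bytes.fromhex(hex_blob)  # e.g. b"iptables-nft-restore\x00--noflush\x00..."
    return " ".join(p.decode(errors="replace") for p in raw.split(b"\x00") if p)

# Sample taken verbatim from the iptables-nft-restore PROCTITLE above:
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
```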
Jan 21 00:58:41.078446 containerd[1672]: time="2026-01-21T00:58:41.078413979Z" level=info msg="StartContainer for \"c95133bd4beb8dde92a89ac30fe39693f90f19fe0c70ecfd270fca44e580e939\" returns successfully" Jan 21 00:58:41.087000 audit: BPF prog-id=226 op=LOAD Jan 21 00:58:41.088000 audit: BPF prog-id=227 op=LOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=227 op=UNLOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=228 op=LOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=229 op=LOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=229 op=UNLOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=228 op=UNLOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.088000 audit: BPF prog-id=230 op=LOAD Jan 21 00:58:41.088000 audit[4532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4518 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333535353631393964626339316362303562626535333133643838 Jan 21 00:58:41.129608 containerd[1672]: time="2026-01-21T00:58:41.129490690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fbd8b4d78-76pnr,Uid:0723cca5-619b-4e7c-893b-f737ac25ba0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6835556199dbc91cb05bbe5313d8875bdb1f787029aa2fcba9737fb3644b55e9\"" Jan 21 00:58:41.340260 containerd[1672]: time="2026-01-21T00:58:41.340207324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:41.341866 containerd[1672]: time="2026-01-21T00:58:41.341836265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 00:58:41.341932 containerd[1672]: time="2026-01-21T00:58:41.341910035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:41.342078 kubelet[2881]: E0121 00:58:41.342047 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:58:41.342141 kubelet[2881]: E0121 00:58:41.342090 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:58:41.342507 containerd[1672]: time="2026-01-21T00:58:41.342480147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 00:58:41.343722 kubelet[2881]: E0121 00:58:41.343676 2881 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:41.598452 containerd[1672]: time="2026-01-21T00:58:41.598360462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-xwgzw,Uid:6a600d11-4238-41cf-86d6-99ea151288a7,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:58:41.626649 systemd-networkd[1556]: vxlan.calico: Gained IPv6LL Jan 21 00:58:41.678091 containerd[1672]: time="2026-01-21T00:58:41.677933800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:41.679978 containerd[1672]: time="2026-01-21T00:58:41.679779780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 00:58:41.679978 containerd[1672]: time="2026-01-21T00:58:41.679858783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:41.680426 kubelet[2881]: E0121 00:58:41.680393 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:58:41.680511 kubelet[2881]: E0121 00:58:41.680437 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:58:41.680833 kubelet[2881]: E0121 00:58:41.680797 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:41.681422 containerd[1672]: time="2026-01-21T00:58:41.681395352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 
00:58:41.682147 kubelet[2881]: E0121 00:58:41.682127 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:58:41.712658 systemd-networkd[1556]: cali9561be023aa: Link UP Jan 21 00:58:41.714946 systemd-networkd[1556]: cali9561be023aa: Gained carrier Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.643 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0 calico-apiserver-5489cbd567- calico-apiserver 6a600d11-4238-41cf-86d6-99ea151288a7 823 0 2026-01-21 00:58:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5489cbd567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e calico-apiserver-5489cbd567-xwgzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9561be023aa [] [] }} ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.643 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.670 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" HandleID="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.670 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" HandleID="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"calico-apiserver-5489cbd567-xwgzw", "timestamp":"2026-01-21 00:58:41.670030518 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.670 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.670 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.670 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.676 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.682 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.687 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.689 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.691 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.691 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.692 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94 Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.695 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.706 [INFO][4586] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.133/26] block=192.168.4.128/26 handle="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.706 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.133/26] handle="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.706 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 00:58:41.729298 containerd[1672]: 2026-01-21 00:58:41.706 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.133/26] IPv6=[] ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" HandleID="k8s-pod-network.bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 00:58:41.709 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0", GenerateName:"calico-apiserver-5489cbd567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a600d11-4238-41cf-86d6-99ea151288a7", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5489cbd567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"calico-apiserver-5489cbd567-xwgzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9561be023aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 00:58:41.709 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.133/32] ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 00:58:41.709 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9561be023aa ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 00:58:41.712 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 
00:58:41.712 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0", GenerateName:"calico-apiserver-5489cbd567-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a600d11-4238-41cf-86d6-99ea151288a7", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5489cbd567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94", Pod:"calico-apiserver-5489cbd567-xwgzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9561be023aa", MAC:"66:09:eb:dd:5b:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:41.730801 containerd[1672]: 2026-01-21 00:58:41.726 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-xwgzw" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--xwgzw-eth0" Jan 21 00:58:41.741000 audit[4600]: NETFILTER_CFG table=filter:128 family=2 entries=58 op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:41.741000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=30568 a0=3 a1=7ffee9d45020 a2=0 a3=7ffee9d4500c items=0 ppid=4194 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.741000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:41.755674 containerd[1672]: time="2026-01-21T00:58:41.755519425Z" level=info msg="connecting to shim bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94" address="unix:///run/containerd/s/875cd699d3e88fb6df657c19d179d937024012f2daf761af9920cc854404ee7c" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:41.770493 kubelet[2881]: E0121 00:58:41.770464 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:58:41.787511 systemd[1]: Started cri-containerd-bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94.scope - libcontainer container bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94. Jan 21 00:58:41.798000 audit: BPF prog-id=231 op=LOAD Jan 21 00:58:41.800297 kernel: kauditd_printk_skb: 337 callbacks suppressed Jan 21 00:58:41.800369 kernel: audit: type=1334 audit(1768957121.798:685): prog-id=231 op=LOAD Jan 21 00:58:41.800000 audit: BPF prog-id=232 op=LOAD Jan 21 00:58:41.803273 kubelet[2881]: I0121 00:58:41.803067 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rrvtd" podStartSLOduration=38.803047953 podStartE2EDuration="38.803047953s" podCreationTimestamp="2026-01-21 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:58:41.802637848 +0000 UTC m=+45.046491128" watchObservedRunningTime="2026-01-21 00:58:41.803047953 +0000 UTC m=+45.046901262" Jan 21 00:58:41.804208 kernel: audit: type=1334 audit(1768957121.800:686): prog-id=232 op=LOAD Jan 21 00:58:41.804583 kernel: audit: type=1300 audit(1768957121.800:686): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.808779 kernel: audit: type=1327 audit(1768957121.800:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=232 op=UNLOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.813824 kernel: audit: type=1334 audit(1768957121.800:687): prog-id=232 op=UNLOAD Jan 21 00:58:41.813890 kernel: audit: type=1300 audit(1768957121.800:687): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=233 op=LOAD Jan 21 00:58:41.822206 kernel: audit: type=1327 audit(1768957121.800:687): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.822401 kernel: audit: type=1334 audit(1768957121.800:688): prog-id=233 op=LOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.824049 kernel: audit: type=1300 audit(1768957121.800:688): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.828182 kernel: audit: type=1327 audit(1768957121.800:688): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=234 op=LOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=234 op=UNLOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=233 op=UNLOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.800000 audit: BPF prog-id=235 op=LOAD Jan 21 00:58:41.800000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4609 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376565343766303363633061633963376232316465643334636431 Jan 21 00:58:41.834000 audit[4640]: NETFILTER_CFG table=filter:129 family=2 entries=17 op=nft_register_rule pid=4640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:41.834000 audit[4640]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6c3947a0 a2=0 a3=7ffd6c39478c items=0 ppid=3027 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:41.838000 audit[4640]: NETFILTER_CFG table=nat:130 family=2 entries=35 op=nft_register_chain pid=4640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:41.838000 audit[4640]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd6c3947a0 a2=0 a3=7ffd6c39478c items=0 ppid=3027 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:41.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:41.873633 containerd[1672]: time="2026-01-21T00:58:41.872283412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-xwgzw,Uid:6a600d11-4238-41cf-86d6-99ea151288a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bc7ee47f03cc0ac9c7b21ded34cd1ff2bd78e77f99ae22f7c459a8eb3d086d94\"" Jan 21 00:58:42.026468 containerd[1672]: time="2026-01-21T00:58:42.026401169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:42.028568 containerd[1672]: 
time="2026-01-21T00:58:42.028524968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 00:58:42.028730 containerd[1672]: time="2026-01-21T00:58:42.028605184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:42.028814 kubelet[2881]: E0121 00:58:42.028739 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:58:42.028868 kubelet[2881]: E0121 00:58:42.028817 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:58:42.029082 kubelet[2881]: E0121 00:58:42.029021 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:42.029564 containerd[1672]: time="2026-01-21T00:58:42.029282737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:58:42.030203 kubelet[2881]: E0121 00:58:42.030131 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:42.138399 systemd-networkd[1556]: cali4716a09ccec: Gained IPv6LL Jan 21 00:58:42.358391 containerd[1672]: time="2026-01-21T00:58:42.358343547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:42.360058 containerd[1672]: time="2026-01-21T00:58:42.360015762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:58:42.360475 containerd[1672]: time="2026-01-21T00:58:42.360104550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:42.360523 kubelet[2881]: E0121 00:58:42.360263 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:42.360523 kubelet[2881]: E0121 00:58:42.360306 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:42.360523 kubelet[2881]: E0121 00:58:42.360432 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t257m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:42.361765 kubelet[2881]: E0121 00:58:42.361733 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:58:42.457318 systemd-networkd[1556]: cali22d23e2ac32: Gained IPv6LL Jan 21 00:58:42.597926 containerd[1672]: time="2026-01-21T00:58:42.597851256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-6mfx7,Uid:b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37,Namespace:calico-apiserver,Attempt:0,}" Jan 21 00:58:42.731397 systemd-networkd[1556]: cali3fecbdaf5c6: Link UP Jan 21 00:58:42.732105 systemd-networkd[1556]: cali3fecbdaf5c6: Gained carrier Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.646 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0 calico-apiserver-5489cbd567- calico-apiserver b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37 824 0 2026-01-21 00:58:12 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5489cbd567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e calico-apiserver-5489cbd567-6mfx7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fecbdaf5c6 [] [] }} ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.646 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.680 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" HandleID="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.681 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" HandleID="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"calico-apiserver-5489cbd567-6mfx7", "timestamp":"2026-01-21 00:58:42.680807854 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.681 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.681 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.681 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.691 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.700 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.706 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.708 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.712 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.712 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.714 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.719 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.726 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.134/26] block=192.168.4.128/26 handle="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.726 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.134/26] handle="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.726 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
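[annotation] The ipam.go entries above show Calico confirming this node's affinity for the 192.168.4.128/26 block and then handing out the next free address (192.168.4.133/26 for the first apiserver pod, 192.168.4.134/26 here). A simplified illustration of that last step using Python's ipaddress module; this is not Calico's implementation, and the already-allocated addresses below are placeholders, not values taken from the log:

```python
# Simplified sketch (assumption, not Calico code): pick the next unallocated
# host address from an affine /26 block, as the ipam.go entries above describe.
import ipaddress

def assign_next(block: str, allocated: set[str]) -> str:
    net = ipaddress.ip_network(block)
    for host in net.hosts():          # 192.168.4.129 .. 192.168.4.190 for this /26
        if str(host) not in allocated:
            allocated.add(str(host))
            return f"{host}/{net.prefixlen}"
    raise RuntimeError(f"block {block} is exhausted")

# Placeholder state: pretend .129-.133 are already claimed by earlier pods.
in_use = {"192.168.4.129", "192.168.4.130", "192.168.4.131",
          "192.168.4.132", "192.168.4.133"}
print(assign_next("192.168.4.128/26", in_use))  # -> 192.168.4.134/26, as in the log
```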
Jan 21 00:58:42.746557 containerd[1672]: 2026-01-21 00:58:42.726 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.134/26] IPv6=[] ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" HandleID="k8s-pod-network.1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 00:58:42.728 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0", GenerateName:"calico-apiserver-5489cbd567-", Namespace:"calico-apiserver", SelfLink:"", UID:"b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5489cbd567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"calico-apiserver-5489cbd567-6mfx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fecbdaf5c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 00:58:42.728 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.134/32] ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 00:58:42.728 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fecbdaf5c6 ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 00:58:42.732 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 
00:58:42.732 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0", GenerateName:"calico-apiserver-5489cbd567-", Namespace:"calico-apiserver", SelfLink:"", UID:"b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5489cbd567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac", Pod:"calico-apiserver-5489cbd567-6mfx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fecbdaf5c6", MAC:"fe:77:99:f8:7c:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:42.748048 containerd[1672]: 2026-01-21 00:58:42.742 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" Namespace="calico-apiserver" Pod="calico-apiserver-5489cbd567-6mfx7" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-calico--apiserver--5489cbd567--6mfx7-eth0" Jan 21 00:58:42.777686 systemd-networkd[1556]: calie9c7eb86e24: Gained IPv6LL Jan 21 00:58:42.782350 kubelet[2881]: E0121 00:58:42.782261 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:58:42.783001 kubelet[2881]: E0121 00:58:42.782639 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:58:42.784306 kubelet[2881]: E0121 00:58:42.784112 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:58:42.789064 containerd[1672]: time="2026-01-21T00:58:42.789006632Z" level=info msg="connecting to shim 1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac" address="unix:///run/containerd/s/e4255121aa19278b6f931f097821741588ce0c6d68fbb9b0aeb3113a74e01da7" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:42.792000 audit[4681]: NETFILTER_CFG table=filter:131 family=2 entries=49 op=nft_register_chain pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:42.792000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffe9f153530 a2=0 a3=7ffe9f15351c items=0 ppid=4194 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:42.831447 systemd[1]: Started cri-containerd-1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac.scope - libcontainer container 1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac. 
Jan 21 00:58:42.833000 audit[4712]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=4712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:42.833000 audit[4712]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffbf0c3820 a2=0 a3=7fffbf0c380c items=0 ppid=3027 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.833000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:42.837000 audit[4712]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=4712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:42.837000 audit[4712]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffbf0c3820 a2=0 a3=7fffbf0c380c items=0 ppid=3027 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:42.856000 audit: BPF prog-id=236 op=LOAD Jan 21 00:58:42.857000 audit: BPF prog-id=237 op=LOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=237 op=UNLOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=238 op=LOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=239 op=LOAD Jan 21 00:58:42.857000 
audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=239 op=UNLOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=238 op=UNLOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.857000 audit: BPF prog-id=240 op=LOAD Jan 21 00:58:42.857000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4687 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:42.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165303038383561363163613466666163306633343864666435633336 Jan 21 00:58:42.893055 containerd[1672]: time="2026-01-21T00:58:42.893015681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5489cbd567-6mfx7,Uid:b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1e00885a61ca4ffac0f348dfd5c36a1138908bde955da6539db3502e0d8cf4ac\"" Jan 21 00:58:42.894803 containerd[1672]: time="2026-01-21T00:58:42.894771764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:58:43.225640 systemd-networkd[1556]: cali9561be023aa: Gained IPv6LL Jan 21 00:58:43.240309 containerd[1672]: time="2026-01-21T00:58:43.240240497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:43.244023 containerd[1672]: time="2026-01-21T00:58:43.243922441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:58:43.244299 containerd[1672]: time="2026-01-21T00:58:43.244068322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:43.244355 kubelet[2881]: E0121 00:58:43.244313 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:43.244430 kubelet[2881]: E0121 00:58:43.244375 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:43.245212 kubelet[2881]: E0121 00:58:43.244549 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 
00:58:43.245804 kubelet[2881]: E0121 00:58:43.245761 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:58:43.598032 containerd[1672]: time="2026-01-21T00:58:43.597803732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rrmh,Uid:5a22e677-c291-41f3-b041-3887656f799c,Namespace:kube-system,Attempt:0,}" Jan 21 00:58:43.692504 systemd-networkd[1556]: calic8d3d768565: Link UP Jan 21 00:58:43.693800 systemd-networkd[1556]: calic8d3d768565: Gained carrier Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.636 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0 coredns-674b8bbfcf- kube-system 5a22e677-c291-41f3-b041-3887656f799c 819 0 2026-01-21 00:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e coredns-674b8bbfcf-6rrmh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8d3d768565 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.636 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.660 [INFO][4739] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" HandleID="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.660 [INFO][4739] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" HandleID="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"coredns-674b8bbfcf-6rrmh", "timestamp":"2026-01-21 00:58:43.660446274 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.660 [INFO][4739] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.660 [INFO][4739] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.660 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.667 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.671 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.674 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.675 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.677 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.677 [INFO][4739] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.678 [INFO][4739] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064 Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.681 [INFO][4739] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.688 [INFO][4739] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.135/26] block=192.168.4.128/26 handle="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.688 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.135/26] handle="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.688 [INFO][4739] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 21 00:58:43.708456 containerd[1672]: 2026-01-21 00:58:43.688 [INFO][4739] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.135/26] IPv6=[] ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" HandleID="k8s-pod-network.2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.689 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a22e677-c291-41f3-b041-3887656f799c", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"coredns-674b8bbfcf-6rrmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d3d768565", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.690 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.135/32] ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.690 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d3d768565 ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.693 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.694 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a22e677-c291-41f3-b041-3887656f799c", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064", Pod:"coredns-674b8bbfcf-6rrmh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d3d768565", MAC:"7a:8b:de:fc:f3:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:43.709483 containerd[1672]: 2026-01-21 00:58:43.706 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rrmh" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-coredns--674b8bbfcf--6rrmh-eth0" Jan 21 00:58:43.723000 audit[4755]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:43.723000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffc3583e7e0 a2=0 a3=7ffc3583e7cc items=0 ppid=4194 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.723000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:43.739885 containerd[1672]: time="2026-01-21T00:58:43.739805195Z" 
level=info msg="connecting to shim 2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064" address="unix:///run/containerd/s/e4d18b35cf11dd7dc438912f0f52f5edc966365131d3f2049f07656977d94bda" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:43.764388 systemd[1]: Started cri-containerd-2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064.scope - libcontainer container 2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064. Jan 21 00:58:43.773000 audit: BPF prog-id=241 op=LOAD Jan 21 00:58:43.774000 audit: BPF prog-id=242 op=LOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=242 op=UNLOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=243 op=LOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=244 op=LOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=244 op=UNLOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=243 op=UNLOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.774000 audit: BPF prog-id=245 op=LOAD Jan 21 00:58:43.774000 audit[4777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4764 pid=4777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.774000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237343863363037366532653664343966393435373634313032316332 Jan 21 00:58:43.787271 kubelet[2881]: E0121 00:58:43.787227 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:58:43.787885 kubelet[2881]: E0121 00:58:43.787859 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:58:43.819000 audit[4799]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:43.819000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe528b17b0 a2=0 a3=7ffe528b179c items=0 ppid=3027 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.819000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:43.823000 audit[4799]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:43.823000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe528b17b0 a2=0 a3=7ffe528b179c items=0 ppid=3027 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:43.827223 containerd[1672]: time="2026-01-21T00:58:43.827128016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rrmh,Uid:5a22e677-c291-41f3-b041-3887656f799c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064\"" Jan 21 00:58:43.832351 containerd[1672]: time="2026-01-21T00:58:43.832324273Z" level=info msg="CreateContainer within sandbox \"2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 21 00:58:43.848252 containerd[1672]: time="2026-01-21T00:58:43.845778992Z" level=info msg="Container 6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:58:43.849253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837719627.mount: Deactivated successfully. Jan 21 00:58:43.858337 containerd[1672]: time="2026-01-21T00:58:43.858259604Z" level=info msg="CreateContainer within sandbox \"2748c6076e2e6d49f9457641021c25466e02e6d38695a68016f9de726e570064\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a\"" Jan 21 00:58:43.859319 containerd[1672]: time="2026-01-21T00:58:43.858968191Z" level=info msg="StartContainer for \"6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a\"" Jan 21 00:58:43.859890 containerd[1672]: time="2026-01-21T00:58:43.859846089Z" level=info msg="connecting to shim 6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a" address="unix:///run/containerd/s/e4d18b35cf11dd7dc438912f0f52f5edc966365131d3f2049f07656977d94bda" protocol=ttrpc version=3 Jan 21 00:58:43.888429 systemd[1]: Started cri-containerd-6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a.scope - libcontainer container 6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a. 
Jan 21 00:58:43.898000 audit: BPF prog-id=246 op=LOAD Jan 21 00:58:43.898000 audit: BPF prog-id=247 op=LOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.898000 audit: BPF prog-id=247 op=UNLOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.898000 audit: BPF prog-id=248 op=LOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.898000 audit: BPF prog-id=249 op=LOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.898000 audit: BPF prog-id=249 op=UNLOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.898000 audit: BPF prog-id=248 op=UNLOAD Jan 21 00:58:43.898000 audit[4805]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.899000 audit: BPF prog-id=250 op=LOAD Jan 21 00:58:43.899000 audit[4805]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4764 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:43.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663376236343833613831636262353438336537656230326266323932 Jan 21 00:58:43.920088 containerd[1672]: time="2026-01-21T00:58:43.919964011Z" level=info msg="StartContainer for \"6c7b6483a81cbb5483e7eb02bf292965bf6262016fce7d05f55a6cb35a00ea9a\" returns successfully" Jan 21 00:58:44.313545 systemd-networkd[1556]: cali3fecbdaf5c6: Gained IPv6LL Jan 21 00:58:44.598909 containerd[1672]: time="2026-01-21T00:58:44.598779297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4stx9,Uid:ab9475e3-845f-4249-abaa-5891387a4c3a,Namespace:calico-system,Attempt:0,}" Jan 21 00:58:44.710064 systemd-networkd[1556]: cali4ffabbc1655: Link UP Jan 21 00:58:44.710443 systemd-networkd[1556]: cali4ffabbc1655: Gained carrier Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.648 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0 goldmane-666569f655- calico-system ab9475e3-845f-4249-abaa-5891387a4c3a 825 0 2026-01-21 00:58:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-n-1ed4874c6e goldmane-666569f655-4stx9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4ffabbc1655 [] [] }} ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.649 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.672 [INFO][4848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" HandleID="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" 
Workload="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.672 [INFO][4848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" HandleID="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-1ed4874c6e", "pod":"goldmane-666569f655-4stx9", "timestamp":"2026-01-21 00:58:44.672074659 +0000 UTC"}, Hostname:"ci-4547-0-0-n-1ed4874c6e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.672 [INFO][4848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.672 [INFO][4848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.672 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-1ed4874c6e' Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.678 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.684 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.688 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.689 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.691 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.691 [INFO][4848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.692 [INFO][4848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.697 [INFO][4848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.705 [INFO][4848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.4.136/26] block=192.168.4.128/26 handle="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.705 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.4.136/26] 
handle="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" host="ci-4547-0-0-n-1ed4874c6e" Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.705 [INFO][4848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 21 00:58:44.729893 containerd[1672]: 2026-01-21 00:58:44.705 [INFO][4848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.4.136/26] IPv6=[] ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" HandleID="k8s-pod-network.f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Workload="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.706 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab9475e3-845f-4249-abaa-5891387a4c3a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"", Pod:"goldmane-666569f655-4stx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ffabbc1655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.707 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.136/32] ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.707 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ffabbc1655 ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.710 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" 
WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.710 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab9475e3-845f-4249-abaa-5891387a4c3a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 21, 0, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-1ed4874c6e", ContainerID:"f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e", Pod:"goldmane-666569f655-4stx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ffabbc1655", MAC:"ce:f7:ae:1a:2e:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 21 00:58:44.730617 containerd[1672]: 2026-01-21 00:58:44.724 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" Namespace="calico-system" Pod="goldmane-666569f655-4stx9" WorkloadEndpoint="ci--4547--0--0--n--1ed4874c6e-k8s-goldmane--666569f655--4stx9-eth0" Jan 21 00:58:44.745000 audit[4865]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=4865 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 21 00:58:44.745000 audit[4865]: SYSCALL arch=c000003e syscall=46 success=yes exit=26388 a0=3 a1=7ffc5540c1b0 a2=0 a3=7ffc5540c19c items=0 ppid=4194 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.745000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 21 00:58:44.767312 containerd[1672]: time="2026-01-21T00:58:44.767276372Z" level=info msg="connecting to shim f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e" address="unix:///run/containerd/s/cfdebd7cf764108b28cec97ae46c21f758632c439d247c18bf301989c04f26dc" namespace=k8s.io protocol=ttrpc version=3 Jan 21 00:58:44.788405 systemd[1]: Started cri-containerd-f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e.scope - libcontainer container 
f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e. Jan 21 00:58:44.790570 kubelet[2881]: E0121 00:58:44.790533 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:58:44.803000 audit: BPF prog-id=251 op=LOAD Jan 21 00:58:44.804000 audit: BPF prog-id=252 op=LOAD Jan 21 00:58:44.804000 audit[4887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.804000 audit: BPF prog-id=252 op=UNLOAD Jan 21 00:58:44.804000 audit[4887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.806489 kubelet[2881]: I0121 00:58:44.806320 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6rrmh" podStartSLOduration=41.806306351 podStartE2EDuration="41.806306351s" podCreationTimestamp="2026-01-21 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 00:58:44.80586589 +0000 UTC m=+48.049719151" watchObservedRunningTime="2026-01-21 00:58:44.806306351 +0000 UTC m=+48.050159634" Jan 21 00:58:44.805000 audit: BPF prog-id=253 op=LOAD Jan 21 00:58:44.805000 audit[4887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.805000 audit: BPF prog-id=254 op=LOAD Jan 21 00:58:44.805000 audit[4887]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.805000 audit: BPF prog-id=254 op=UNLOAD Jan 21 00:58:44.805000 audit[4887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.805000 audit: BPF prog-id=253 op=UNLOAD Jan 21 00:58:44.805000 audit[4887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.805000 audit: BPF prog-id=255 op=LOAD Jan 21 00:58:44.805000 audit[4887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4875 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638386236626337393630666164366532666663323638646137333433 Jan 21 00:58:44.821000 audit[4907]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:44.821000 audit[4907]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc96d96590 a2=0 a3=7ffc96d9657c items=0 ppid=3027 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:44.825000 audit[4907]: NETFILTER_CFG table=nat:139 family=2 entries=44 op=nft_register_rule pid=4907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:44.825000 audit[4907]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc96d96590 a2=0 a3=7ffc96d9657c items=0 ppid=3027 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:44.875583 containerd[1672]: time="2026-01-21T00:58:44.875415556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4stx9,Uid:ab9475e3-845f-4249-abaa-5891387a4c3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f88b6bc7960fad6e2ffc268da73437dbe7e1446c4a66ed70c5e322b642dfb43e\"" Jan 21 00:58:44.877438 containerd[1672]: time="2026-01-21T00:58:44.877394757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 00:58:44.966000 audit[4915]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:44.966000 audit[4915]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1b526fc0 a2=0 a3=7ffe1b526fac items=0 ppid=3027 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:44.988000 audit[4915]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:44.988000 audit[4915]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe1b526fc0 a2=0 a3=7ffe1b526fac items=0 ppid=3027 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:44.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:45.410360 containerd[1672]: time="2026-01-21T00:58:45.410198015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:45.412459 containerd[1672]: time="2026-01-21T00:58:45.412294398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 00:58:45.412459 containerd[1672]: time="2026-01-21T00:58:45.412391326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:45.412750 kubelet[2881]: E0121 00:58:45.412531 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:58:45.412750 kubelet[2881]: E0121 00:58:45.412583 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
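Note: the proctitle= field in the audit PROCTITLE records above is the triggering command line, hex-encoded with NUL-separated arguments. A minimal Python sketch to decode one (the sample value is copied verbatim from the iptables-restore record above):

# Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
proctitle_hex = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode("utf-8", "replace") for arg in argv))
# -> iptables-restore -w 5 -W 100000 --noflush --counters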
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:58:45.412750 kubelet[2881]: E0121 00:58:45.412719 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2jpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:45.413954 kubelet[2881]: E0121 00:58:45.413914 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" 
podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:58:45.593405 systemd-networkd[1556]: calic8d3d768565: Gained IPv6LL Jan 21 00:58:45.791453 kubelet[2881]: E0121 00:58:45.791420 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:58:46.013000 audit[4924]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:46.013000 audit[4924]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd473c2c40 a2=0 a3=7ffd473c2c2c items=0 ppid=3027 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:46.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:46.017000 audit[4924]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:58:46.017000 audit[4924]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd473c2c40 a2=0 a3=7ffd473c2c2c items=0 ppid=3027 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:58:46.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:58:46.105836 systemd-networkd[1556]: cali4ffabbc1655: Gained IPv6LL Jan 21 00:58:46.793477 kubelet[2881]: E0121 00:58:46.793359 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:58:54.601703 containerd[1672]: time="2026-01-21T00:58:54.601646360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 00:58:54.952986 containerd[1672]: time="2026-01-21T00:58:54.952945640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:54.954678 containerd[1672]: time="2026-01-21T00:58:54.954604171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 00:58:54.954773 containerd[1672]: time="2026-01-21T00:58:54.954660835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 
00:58:54.955019 kubelet[2881]: E0121 00:58:54.954933 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:58:54.955419 kubelet[2881]: E0121 00:58:54.955051 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:58:54.955419 kubelet[2881]: E0121 00:58:54.955286 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:adf2c7942adb4caa8cbe3abb5e35591e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:54.957821 containerd[1672]: time="2026-01-21T00:58:54.957797203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 00:58:55.301994 containerd[1672]: time="2026-01-21T00:58:55.301761578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:55.304187 containerd[1672]: time="2026-01-21T00:58:55.304106532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 00:58:55.305177 containerd[1672]: time="2026-01-21T00:58:55.304118491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:55.305257 kubelet[2881]: E0121 00:58:55.304653 2881 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:58:55.305257 kubelet[2881]: E0121 00:58:55.304708 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:58:55.305257 kubelet[2881]: E0121 00:58:55.304850 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:55.306539 kubelet[2881]: E0121 00:58:55.306372 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:58:55.605268 containerd[1672]: time="2026-01-21T00:58:55.605135073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 00:58:55.931394 containerd[1672]: time="2026-01-21T00:58:55.931221649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:55.934453 containerd[1672]: time="2026-01-21T00:58:55.934271533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 00:58:55.934453 containerd[1672]: time="2026-01-21T00:58:55.934355565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:55.934756 kubelet[2881]: E0121 00:58:55.934639 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:58:55.934756 kubelet[2881]: E0121 00:58:55.934719 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:58:55.935039 kubelet[2881]: E0121 00:58:55.934925 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:55.936984 kubelet[2881]: E0121 00:58:55.936908 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:58:57.600461 containerd[1672]: time="2026-01-21T00:58:57.600425294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:58:57.942333 containerd[1672]: time="2026-01-21T00:58:57.942285735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:57.943917 containerd[1672]: time="2026-01-21T00:58:57.943844006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:58:57.943917 containerd[1672]: time="2026-01-21T00:58:57.943886089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:57.944297 kubelet[2881]: E0121 00:58:57.944135 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:57.944297 kubelet[2881]: E0121 00:58:57.944186 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:57.944602 containerd[1672]: time="2026-01-21T00:58:57.944475954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 00:58:57.945093 kubelet[2881]: E0121 00:58:57.944819 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t257m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:57.946240 kubelet[2881]: E0121 00:58:57.946213 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:58:58.277639 containerd[1672]: time="2026-01-21T00:58:58.277494103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:58.279113 containerd[1672]: time="2026-01-21T00:58:58.279058644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 00:58:58.279480 containerd[1672]: time="2026-01-21T00:58:58.279141271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:58.279524 kubelet[2881]: 
E0121 00:58:58.279273 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:58:58.279524 kubelet[2881]: E0121 00:58:58.279313 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:58:58.279579 containerd[1672]: time="2026-01-21T00:58:58.279535513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:58:58.280189 kubelet[2881]: E0121 00:58:58.279799 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:58.606826 containerd[1672]: time="2026-01-21T00:58:58.606611682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:58.609823 containerd[1672]: time="2026-01-21T00:58:58.609616120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 
00:58:58.609823 containerd[1672]: time="2026-01-21T00:58:58.609783944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:58.610496 kubelet[2881]: E0121 00:58:58.610271 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:58.610496 kubelet[2881]: E0121 00:58:58.610338 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:58:58.611262 kubelet[2881]: E0121 00:58:58.610577 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:58.611427 containerd[1672]: time="2026-01-21T00:58:58.610802622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 
00:58:58.613202 kubelet[2881]: E0121 00:58:58.613130 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:58:58.949190 containerd[1672]: time="2026-01-21T00:58:58.949021114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:58:58.951355 containerd[1672]: time="2026-01-21T00:58:58.951238897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 00:58:58.951355 containerd[1672]: time="2026-01-21T00:58:58.951301942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 00:58:58.951564 kubelet[2881]: E0121 00:58:58.951472 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:58:58.951564 kubelet[2881]: E0121 00:58:58.951515 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:58:58.951889 kubelet[2881]: E0121 00:58:58.951628 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 00:58:58.953115 kubelet[2881]: E0121 00:58:58.953070 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:59:00.599329 containerd[1672]: time="2026-01-21T00:59:00.599264956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 00:59:00.928378 containerd[1672]: time="2026-01-21T00:59:00.928328622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:00.930010 containerd[1672]: time="2026-01-21T00:59:00.929971061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 00:59:00.930194 containerd[1672]: time="2026-01-21T00:59:00.930051987Z" 
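The spacing of the ImagePullBackOff retries above and below follows the kubelet's image-pull back-off. A small sketch of that schedule, assuming the kubelet defaults of a 10 s initial delay that doubles per failed pull and is capped at 300 s:

# Image-pull back-off schedule, assuming kubelet defaults (base 10s, factor 2, cap 300s).
base_s, factor, cap_s = 10, 2, 300
delay, schedule = base_s, []
for _ in range(8):
    schedule.append(min(delay, cap_s))
    delay *= factor
print(schedule)  # [10, 20, 40, 80, 160, 300, 300, 300]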
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:00.930224 kubelet[2881]: E0121 00:59:00.930195 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:00.930474 kubelet[2881]: E0121 00:59:00.930236 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:00.930474 kubelet[2881]: E0121 00:59:00.930365 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2jpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:00.932369 kubelet[2881]: E0121 00:59:00.932270 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:59:05.601258 kubelet[2881]: E0121 00:59:05.600850 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:59:06.598138 kubelet[2881]: E0121 00:59:06.598103 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:59:09.599527 kubelet[2881]: E0121 00:59:09.599482 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:59:09.601367 kubelet[2881]: E0121 00:59:09.601340 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:59:11.601322 kubelet[2881]: E0121 00:59:11.601064 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:59:12.599453 kubelet[2881]: E0121 00:59:12.599375 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:59:17.600221 containerd[1672]: time="2026-01-21T00:59:17.599712968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 00:59:17.950011 containerd[1672]: time="2026-01-21T00:59:17.949962588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:17.951906 containerd[1672]: time="2026-01-21T00:59:17.951828043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 00:59:17.952002 containerd[1672]: time="2026-01-21T00:59:17.951861358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:17.952232 kubelet[2881]: E0121 00:59:17.952147 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:59:17.952486 kubelet[2881]: E0121 00:59:17.952248 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 00:59:17.953163 kubelet[2881]: E0121 00:59:17.953116 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:adf2c7942adb4caa8cbe3abb5e35591e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:17.955303 containerd[1672]: time="2026-01-21T00:59:17.955278873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 00:59:18.298974 containerd[1672]: time="2026-01-21T00:59:18.298629150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:18.301996 containerd[1672]: time="2026-01-21T00:59:18.301924171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 00:59:18.302131 containerd[1672]: time="2026-01-21T00:59:18.302017326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:18.302340 kubelet[2881]: E0121 00:59:18.302287 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:59:18.302340 kubelet[2881]: E0121 00:59:18.302333 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 00:59:18.302491 kubelet[2881]: E0121 00:59:18.302448 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:18.303748 kubelet[2881]: E0121 00:59:18.303682 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:59:20.599583 containerd[1672]: time="2026-01-21T00:59:20.599513369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 00:59:20.939187 containerd[1672]: time="2026-01-21T00:59:20.938997374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:20.940623 containerd[1672]: time="2026-01-21T00:59:20.940528174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 00:59:20.940623 
containerd[1672]: time="2026-01-21T00:59:20.940541353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:20.940895 kubelet[2881]: E0121 00:59:20.940865 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:59:20.941331 kubelet[2881]: E0121 00:59:20.941209 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 00:59:20.942170 kubelet[2881]: E0121 00:59:20.941417 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:20.942280 containerd[1672]: time="2026-01-21T00:59:20.941601507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 00:59:21.281323 containerd[1672]: time="2026-01-21T00:59:21.281212570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:21.282919 containerd[1672]: time="2026-01-21T00:59:21.282816052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 00:59:21.282919 containerd[1672]: time="2026-01-21T00:59:21.282891642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:21.283217 kubelet[2881]: E0121 00:59:21.283126 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:59:21.283217 kubelet[2881]: E0121 00:59:21.283194 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 00:59:21.283429 kubelet[2881]: E0121 00:59:21.283374 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:21.284288 containerd[1672]: time="2026-01-21T00:59:21.284233168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 00:59:21.285339 kubelet[2881]: E0121 00:59:21.285294 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:59:21.615250 containerd[1672]: time="2026-01-21T00:59:21.614976512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:21.618138 containerd[1672]: time="2026-01-21T00:59:21.618083901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 00:59:21.618728 containerd[1672]: time="2026-01-21T00:59:21.618674753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:21.618917 kubelet[2881]: E0121 00:59:21.618857 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:59:21.618917 kubelet[2881]: E0121 00:59:21.618903 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 00:59:21.619125 kubelet[2881]: E0121 00:59:21.619015 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:21.620458 kubelet[2881]: E0121 00:59:21.620422 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:59:23.599560 containerd[1672]: time="2026-01-21T00:59:23.599521816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 00:59:23.959919 containerd[1672]: time="2026-01-21T00:59:23.959656654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:23.961976 containerd[1672]: time="2026-01-21T00:59:23.961879261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:59:23.962142 containerd[1672]: time="2026-01-21T00:59:23.961932008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:23.962391 kubelet[2881]: E0121 00:59:23.962349 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:23.963833 kubelet[2881]: E0121 00:59:23.962409 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:23.963833 kubelet[2881]: E0121 00:59:23.962581 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:23.963833 kubelet[2881]: E0121 00:59:23.963766 2881 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:59:25.601418 containerd[1672]: time="2026-01-21T00:59:25.601384283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 00:59:25.962452 containerd[1672]: time="2026-01-21T00:59:25.962287640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:25.964355 containerd[1672]: time="2026-01-21T00:59:25.964264452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 00:59:25.964593 containerd[1672]: time="2026-01-21T00:59:25.964490897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:25.966793 kubelet[2881]: E0121 00:59:25.966296 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:25.966793 kubelet[2881]: E0121 00:59:25.966346 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 00:59:25.966793 kubelet[2881]: E0121 00:59:25.966465 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2jpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:25.969932 kubelet[2881]: E0121 00:59:25.969849 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:59:26.598998 containerd[1672]: time="2026-01-21T00:59:26.598785297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
21 00:59:27.137725 containerd[1672]: time="2026-01-21T00:59:27.137563725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 00:59:27.141384 containerd[1672]: time="2026-01-21T00:59:27.141257593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 00:59:27.141384 containerd[1672]: time="2026-01-21T00:59:27.141353142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 00:59:27.141581 kubelet[2881]: E0121 00:59:27.141522 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:27.142199 kubelet[2881]: E0121 00:59:27.141592 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 00:59:27.142199 kubelet[2881]: E0121 00:59:27.141761 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t257m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 00:59:27.144447 kubelet[2881]: E0121 00:59:27.144406 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:59:33.601419 kubelet[2881]: E0121 00:59:33.600506 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:59:33.601925 kubelet[2881]: E0121 00:59:33.601615 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:59:35.600500 kubelet[2881]: E0121 00:59:35.600455 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:59:37.601127 kubelet[2881]: E0121 00:59:37.601082 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:59:37.602051 kubelet[2881]: E0121 00:59:37.601796 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:59:40.599881 kubelet[2881]: E0121 00:59:40.599829 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:59:46.601519 kubelet[2881]: E0121 00:59:46.601480 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 00:59:47.601555 kubelet[2881]: E0121 00:59:47.601505 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:59:48.600192 kubelet[2881]: E0121 00:59:48.600145 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 00:59:48.600936 kubelet[2881]: E0121 00:59:48.600909 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 00:59:51.599769 kubelet[2881]: E0121 00:59:51.599727 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 00:59:51.600164 kubelet[2881]: E0121 00:59:51.600019 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 00:59:59.598805 kubelet[2881]: E0121 00:59:59.598696 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 00:59:59.599869 kubelet[2881]: E0121 00:59:59.599663 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:00:01.601420 containerd[1672]: time="2026-01-21T01:00:01.601380001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
21 01:00:03.017955 containerd[1672]: time="2026-01-21T01:00:03.017911049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:03.023430 containerd[1672]: time="2026-01-21T01:00:03.023381097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:00:03.023537 containerd[1672]: time="2026-01-21T01:00:03.023478091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:03.023664 kubelet[2881]: E0121 01:00:03.023631 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:03.023912 kubelet[2881]: E0121 01:00:03.023678 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:00:03.023912 kubelet[2881]: E0121 01:00:03.023860 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:adf2c7942adb4caa8cbe3abb5e35591e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:03.024939 containerd[1672]: time="2026-01-21T01:00:03.024920917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:00:03.963283 containerd[1672]: time="2026-01-21T01:00:03.963244998Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Jan 21 01:00:03.964810 containerd[1672]: time="2026-01-21T01:00:03.964780943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:00:03.965109 containerd[1672]: time="2026-01-21T01:00:03.964850652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:03.965172 kubelet[2881]: E0121 01:00:03.964959 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:03.965172 kubelet[2881]: E0121 01:00:03.964996 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:00:03.965328 kubelet[2881]: E0121 01:00:03.965249 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:03.965523 containerd[1672]: time="2026-01-21T01:00:03.965508518Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:00:06.412976 containerd[1672]: time="2026-01-21T01:00:06.412933955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:06.415061 containerd[1672]: time="2026-01-21T01:00:06.415002892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:06.415216 containerd[1672]: time="2026-01-21T01:00:06.415017126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:00:06.415562 kubelet[2881]: E0121 01:00:06.415405 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:06.415562 kubelet[2881]: E0121 01:00:06.415447 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:00:06.415821 containerd[1672]: time="2026-01-21T01:00:06.415699726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:00:06.416428 kubelet[2881]: E0121 01:00:06.416298 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:06.417538 kubelet[2881]: E0121 01:00:06.417514 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:00:06.605886 kubelet[2881]: E0121 01:00:06.605843 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:00:07.412695 containerd[1672]: 
time="2026-01-21T01:00:07.412547220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:07.414166 containerd[1672]: time="2026-01-21T01:00:07.414068051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:00:07.414166 containerd[1672]: time="2026-01-21T01:00:07.414142513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:07.415359 kubelet[2881]: E0121 01:00:07.415312 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:07.415435 kubelet[2881]: E0121 01:00:07.415370 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:00:07.415753 containerd[1672]: time="2026-01-21T01:00:07.415594947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:07.415862 kubelet[2881]: E0121 01:00:07.415831 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:07.417083 kubelet[2881]: E0121 01:00:07.417018 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:00:08.704264 containerd[1672]: time="2026-01-21T01:00:08.704126957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:08.706132 containerd[1672]: time="2026-01-21T01:00:08.706083503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:00:08.706330 containerd[1672]: time="2026-01-21T01:00:08.706180503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:08.706498 kubelet[2881]: 
E0121 01:00:08.706429 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:08.706498 kubelet[2881]: E0121 01:00:08.706484 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:08.707201 kubelet[2881]: E0121 01:00:08.706850 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:08.708496 kubelet[2881]: E0121 01:00:08.708466 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:00:13.602304 containerd[1672]: time="2026-01-21T01:00:13.602074153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:00:14.557937 systemd[1]: Started sshd@7-10.0.0.94:22-4.153.228.146:55378.service - OpenSSH per-connection server daemon (4.153.228.146:55378). Jan 21 01:00:14.564021 kernel: kauditd_printk_skb: 145 callbacks suppressed Jan 21 01:00:14.564078 kernel: audit: type=1130 audit(1768957214.557:740): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.94:22-4.153.228.146:55378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:14.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.94:22-4.153.228.146:55378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:15.008181 containerd[1672]: time="2026-01-21T01:00:15.007996526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:15.011540 containerd[1672]: time="2026-01-21T01:00:15.011411062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:00:15.011540 containerd[1672]: time="2026-01-21T01:00:15.011416847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:15.011699 kubelet[2881]: E0121 01:00:15.011637 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:15.011699 kubelet[2881]: E0121 01:00:15.011684 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:00:15.012162 kubelet[2881]: E0121 01:00:15.011870 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2jpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:15.013111 containerd[1672]: time="2026-01-21T01:00:15.012510945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:00:15.013988 kubelet[2881]: E0121 01:00:15.013962 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" 
podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:00:15.150000 audit[5064]: USER_ACCT pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.154361 sshd[5064]: Accepted publickey for core from 4.153.228.146 port 55378 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:15.154000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.156296 kernel: audit: type=1101 audit(1768957215.150:741): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.156333 kernel: audit: type=1103 audit(1768957215.154:742): pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.156600 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:15.159774 kernel: audit: type=1006 audit(1768957215.154:743): pid=5064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 21 01:00:15.154000 audit[5064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc57854f70 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:15.166193 kernel: audit: type=1300 audit(1768957215.154:743): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc57854f70 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:15.166439 kernel: audit: type=1327 audit(1768957215.154:743): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:15.154000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:15.170628 systemd-logind[1652]: New session 9 of user core. Jan 21 01:00:15.181434 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 21 01:00:15.185000 audit[5064]: USER_START pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.192234 kernel: audit: type=1105 audit(1768957215.185:744): pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.192000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.197177 kernel: audit: type=1103 audit(1768957215.192:745): pid=5068 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.541183 sshd[5068]: Connection closed by 4.153.228.146 port 55378 Jan 21 01:00:15.543313 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:15.543000 audit[5064]: USER_END pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.551350 kernel: audit: type=1106 audit(1768957215.543:746): pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.551150 systemd[1]: sshd@7-10.0.0.94:22-4.153.228.146:55378.service: Deactivated successfully. Jan 21 01:00:15.552979 systemd[1]: session-9.scope: Deactivated successfully. Jan 21 01:00:15.544000 audit[5064]: CRED_DISP pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.557246 kernel: audit: type=1104 audit(1768957215.544:747): pid=5064 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:15.557343 systemd-logind[1652]: Session 9 logged out. Waiting for processes to exit. Jan 21 01:00:15.558475 systemd-logind[1652]: Removed session 9. Jan 21 01:00:15.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.94:22-4.153.228.146:55378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:15.790041 containerd[1672]: time="2026-01-21T01:00:15.789990662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:15.792092 containerd[1672]: time="2026-01-21T01:00:15.791911308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:00:15.792092 containerd[1672]: time="2026-01-21T01:00:15.791962226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:15.792580 kubelet[2881]: E0121 01:00:15.792210 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:15.792580 kubelet[2881]: E0121 01:00:15.792285 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:00:15.793290 kubelet[2881]: E0121 01:00:15.793227 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:15.794473 kubelet[2881]: E0121 01:00:15.794423 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:00:17.602994 containerd[1672]: time="2026-01-21T01:00:17.602779187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:00:17.603911 kubelet[2881]: E0121 01:00:17.603295 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:00:18.348178 containerd[1672]: time="2026-01-21T01:00:18.348118234Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:00:18.350403 containerd[1672]: time="2026-01-21T01:00:18.350346296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:00:18.350461 containerd[1672]: time="2026-01-21T01:00:18.350435920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:00:18.350861 kubelet[2881]: E0121 01:00:18.350630 2881 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:18.350861 kubelet[2881]: E0121 01:00:18.350680 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:00:18.350861 kubelet[2881]: E0121 01:00:18.350816 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t257m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-xwgzw_calico-apiserver(6a600d11-4238-41cf-86d6-99ea151288a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:00:18.352058 kubelet[2881]: E0121 01:00:18.352015 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:00:20.652325 systemd[1]: Started sshd@8-10.0.0.94:22-4.153.228.146:55390.service - OpenSSH per-connection server daemon (4.153.228.146:55390). Jan 21 01:00:20.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-4.153.228.146:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:20.654365 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:00:20.654467 kernel: audit: type=1130 audit(1768957220.652:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-4.153.228.146:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:21.219806 sshd[5094]: Accepted publickey for core from 4.153.228.146 port 55390 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:21.219000 audit[5094]: USER_ACCT pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.225280 kernel: audit: type=1101 audit(1768957221.219:750): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.225457 kernel: audit: type=1103 audit(1768957221.224:751): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.224000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.226431 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:21.232345 kernel: audit: type=1006 audit(1768957221.224:752): pid=5094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 21 01:00:21.232430 kernel: audit: type=1300 audit(1768957221.224:752): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12848950 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:21.224000 audit[5094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12848950 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:21.224000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:21.237863 kernel: audit: type=1327 audit(1768957221.224:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 
01:00:21.240607 systemd-logind[1652]: New session 10 of user core. Jan 21 01:00:21.247507 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 21 01:00:21.251000 audit[5094]: USER_START pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.257273 kernel: audit: type=1105 audit(1768957221.251:753): pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.254000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.266904 kernel: audit: type=1103 audit(1768957221.254:754): pid=5098 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.603874 kubelet[2881]: E0121 01:00:21.602423 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:00:21.607055 sshd[5098]: Connection closed by 4.153.228.146 port 55390 Jan 21 01:00:21.607491 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:21.608000 audit[5094]: USER_END pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.611774 systemd[1]: sshd@8-10.0.0.94:22-4.153.228.146:55390.service: Deactivated successfully. Jan 21 01:00:21.613205 systemd-logind[1652]: Session 10 logged out. Waiting for processes to exit. 
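
From this point the kubelet entries switch from ErrImagePull to ImagePullBackOff ("Back-off pulling image ..."), and the retries become less frequent: the kubelet retries each failing pull on an exponential back-off rather than immediately. The sketch below only illustrates that schedule; the 10 s initial delay, doubling factor and 300 s cap are the commonly cited kubelet defaults and should be treated as assumptions here, not values taken from this log.

# Hypothetical sketch of the exponential back-off behind the ImagePullBackOff
# entries above. initial/factor/cap are assumed defaults, not log-derived values.
def backoff_delays(attempts: int, initial: float = 10.0, factor: float = 2.0, cap: float = 300.0):
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    print([int(d) for d in backoff_delays(7)])   # [10, 20, 40, 80, 160, 300, 300]
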
Jan 21 01:00:21.614184 kernel: audit: type=1106 audit(1768957221.608:755): pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.615421 systemd[1]: session-10.scope: Deactivated successfully. Jan 21 01:00:21.608000 audit[5094]: CRED_DISP pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.620546 systemd-logind[1652]: Removed session 10. Jan 21 01:00:21.622196 kernel: audit: type=1104 audit(1768957221.608:756): pid=5094 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:21.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-4.153.228.146:55390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:22.599692 kubelet[2881]: E0121 01:00:22.599336 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:00:26.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.94:22-4.153.228.146:33488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:26.715113 systemd[1]: Started sshd@9-10.0.0.94:22-4.153.228.146:33488.service - OpenSSH per-connection server daemon (4.153.228.146:33488). Jan 21 01:00:26.716444 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:00:26.716506 kernel: audit: type=1130 audit(1768957226.715:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.94:22-4.153.228.146:33488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:27.266000 audit[5112]: USER_ACCT pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.268495 sshd[5112]: Accepted publickey for core from 4.153.228.146 port 33488 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:27.271185 kernel: audit: type=1101 audit(1768957227.266:759): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.271000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.273120 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:27.276235 kernel: audit: type=1103 audit(1768957227.271:760): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.271000 audit[5112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0aec3fb0 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:27.283173 kernel: audit: type=1006 audit(1768957227.271:761): pid=5112 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 21 01:00:27.283242 kernel: audit: type=1300 audit(1768957227.271:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0aec3fb0 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:27.283899 systemd-logind[1652]: New session 11 of user core. Jan 21 01:00:27.284405 kernel: audit: type=1327 audit(1768957227.271:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:27.271000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:27.290418 systemd[1]: Started session-11.scope - Session 11 of User core. 
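
Two fields in the raw audit records above are easier to read once decoded: the number inside "audit(1768957227.271:761)" is a Unix timestamp followed by a record serial, and PROCTITLE values are hex-encoded command lines. A short standard-library sketch, using values copied from the entries above (the UTC interpretation of the timestamp is an assumption, suggested by the matching wall-clock times in the journal):

# Decode an audit timestamp and a PROCTITLE value taken from the records above.
from datetime import datetime, timezone

stamp = 1768957227.271   # from "audit(1768957227.271:761)"
print(datetime.fromtimestamp(stamp, tz=timezone.utc).isoformat())
# -> 2026-01-21T01:00:27.271000+00:00, matching the surrounding "Jan 21 01:00:27" lines

proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"
print(bytes.fromhex(proctitle).decode())
# -> "sshd-session: core [priv]"
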
Jan 21 01:00:27.296000 audit[5112]: USER_START pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.302173 kernel: audit: type=1105 audit(1768957227.296:762): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.302000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.307193 kernel: audit: type=1103 audit(1768957227.302:763): pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.602798 kubelet[2881]: E0121 01:00:27.601775 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:00:27.603719 kubelet[2881]: E0121 01:00:27.603313 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:00:27.633128 sshd[5116]: Connection closed by 4.153.228.146 port 33488 Jan 21 01:00:27.632722 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:27.636000 audit[5112]: USER_END pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.642252 kernel: audit: type=1106 audit(1768957227.636:764): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.641561 systemd[1]: 
sshd@9-10.0.0.94:22-4.153.228.146:33488.service: Deactivated successfully. Jan 21 01:00:27.636000 audit[5112]: CRED_DISP pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.646315 systemd[1]: session-11.scope: Deactivated successfully. Jan 21 01:00:27.649119 kernel: audit: type=1104 audit(1768957227.636:765): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:27.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.94:22-4.153.228.146:33488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:27.650392 systemd-logind[1652]: Session 11 logged out. Waiting for processes to exit. Jan 21 01:00:27.651803 systemd-logind[1652]: Removed session 11. Jan 21 01:00:28.600240 kubelet[2881]: E0121 01:00:28.600137 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:00:30.599600 kubelet[2881]: E0121 01:00:30.599324 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:00:32.741762 systemd[1]: Started sshd@10-10.0.0.94:22-4.153.228.146:33498.service - OpenSSH per-connection server daemon (4.153.228.146:33498). Jan 21 01:00:32.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.94:22-4.153.228.146:33498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:32.743413 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:00:32.743456 kernel: audit: type=1130 audit(1768957232.740:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.94:22-4.153.228.146:33498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:33.262000 audit[5128]: USER_ACCT pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.266321 sshd[5128]: Accepted publickey for core from 4.153.228.146 port 33498 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:33.269182 kernel: audit: type=1101 audit(1768957233.262:768): pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.270565 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:33.268000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.276356 kernel: audit: type=1103 audit(1768957233.268:769): pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.284087 systemd-logind[1652]: New session 12 of user core. Jan 21 01:00:33.284371 kernel: audit: type=1006 audit(1768957233.268:770): pid=5128 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 21 01:00:33.268000 audit[5128]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9f2dbdc0 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:33.292192 kernel: audit: type=1300 audit(1768957233.268:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9f2dbdc0 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:33.292280 kernel: audit: type=1327 audit(1768957233.268:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:33.268000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:33.292584 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 21 01:00:33.294000 audit[5128]: USER_START pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.300000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.303407 kernel: audit: type=1105 audit(1768957233.294:771): pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.303462 kernel: audit: type=1103 audit(1768957233.300:772): pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.600820 kubelet[2881]: E0121 01:00:33.600370 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:00:33.621072 sshd[5132]: Connection closed by 4.153.228.146 port 33498 Jan 21 01:00:33.621814 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:33.622000 audit[5128]: USER_END pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.630171 kernel: audit: type=1106 audit(1768957233.622:773): pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.631125 systemd[1]: sshd@10-10.0.0.94:22-4.153.228.146:33498.service: Deactivated successfully. Jan 21 01:00:33.634422 systemd[1]: session-12.scope: Deactivated successfully. Jan 21 01:00:33.637390 systemd-logind[1652]: Session 12 logged out. Waiting for processes to exit. Jan 21 01:00:33.638239 systemd-logind[1652]: Removed session 12. 
Jan 21 01:00:33.622000 audit[5128]: CRED_DISP pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.643180 kernel: audit: type=1104 audit(1768957233.622:774): pid=5128 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:33.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.94:22-4.153.228.146:33498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:33.730907 systemd[1]: Started sshd@11-10.0.0.94:22-4.153.228.146:33500.service - OpenSSH per-connection server daemon (4.153.228.146:33500). Jan 21 01:00:33.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.94:22-4.153.228.146:33500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.290000 audit[5147]: USER_ACCT pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.292224 sshd[5147]: Accepted publickey for core from 4.153.228.146 port 33500 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:34.292000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.292000 audit[5147]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc01aa73d0 a2=3 a3=0 items=0 ppid=1 pid=5147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:34.292000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:34.294622 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:34.303771 systemd-logind[1652]: New session 13 of user core. Jan 21 01:00:34.311715 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 21 01:00:34.313000 audit[5147]: USER_START pid=5147 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.315000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.599265 kubelet[2881]: E0121 01:00:34.599085 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:00:34.693905 sshd[5151]: Connection closed by 4.153.228.146 port 33500 Jan 21 01:00:34.694301 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:34.694000 audit[5147]: USER_END pid=5147 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.695000 audit[5147]: CRED_DISP pid=5147 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:34.699450 systemd[1]: sshd@11-10.0.0.94:22-4.153.228.146:33500.service: Deactivated successfully. Jan 21 01:00:34.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.94:22-4.153.228.146:33500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:34.703207 systemd[1]: session-13.scope: Deactivated successfully. Jan 21 01:00:34.704905 systemd-logind[1652]: Session 13 logged out. Waiting for processes to exit. Jan 21 01:00:34.706281 systemd-logind[1652]: Removed session 13. Jan 21 01:00:34.798031 systemd[1]: Started sshd@12-10.0.0.94:22-4.153.228.146:44238.service - OpenSSH per-connection server daemon (4.153.228.146:44238). Jan 21 01:00:34.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.94:22-4.153.228.146:44238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:35.332000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.334377 sshd[5160]: Accepted publickey for core from 4.153.228.146 port 44238 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:35.334000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.334000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1eba50f0 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:35.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:35.336098 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:35.342712 systemd-logind[1652]: New session 14 of user core. Jan 21 01:00:35.349384 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 21 01:00:35.351000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.353000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.677117 sshd[5164]: Connection closed by 4.153.228.146 port 44238 Jan 21 01:00:35.677889 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:35.679000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.679000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:35.683468 systemd[1]: sshd@12-10.0.0.94:22-4.153.228.146:44238.service: Deactivated successfully. Jan 21 01:00:35.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.94:22-4.153.228.146:44238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:35.689096 systemd[1]: session-14.scope: Deactivated successfully. Jan 21 01:00:35.693393 systemd-logind[1652]: Session 14 logged out. Waiting for processes to exit. 
Jan 21 01:00:35.697081 systemd-logind[1652]: Removed session 14. Jan 21 01:00:39.598960 kubelet[2881]: E0121 01:00:39.598907 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:00:39.599733 kubelet[2881]: E0121 01:00:39.599310 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:00:40.791213 systemd[1]: Started sshd@13-10.0.0.94:22-4.153.228.146:44242.service - OpenSSH per-connection server daemon (4.153.228.146:44242). Jan 21 01:00:40.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.94:22-4.153.228.146:44242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:40.792580 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 21 01:00:40.792621 kernel: audit: type=1130 audit(1768957240.790:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.94:22-4.153.228.146:44242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:41.318605 sshd[5205]: Accepted publickey for core from 4.153.228.146 port 44242 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:41.317000 audit[5205]: USER_ACCT pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.320085 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:41.324202 kernel: audit: type=1101 audit(1768957241.317:795): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.329724 systemd-logind[1652]: New session 15 of user core. 
Jan 21 01:00:41.318000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.335740 kernel: audit: type=1103 audit(1768957241.318:796): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.338959 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 21 01:00:41.339171 kernel: audit: type=1006 audit(1768957241.318:797): pid=5205 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 21 01:00:41.318000 audit[5205]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf329b30 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:41.347174 kernel: audit: type=1300 audit(1768957241.318:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbf329b30 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:41.318000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:41.351168 kernel: audit: type=1327 audit(1768957241.318:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:41.345000 audit[5205]: USER_START pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.358169 kernel: audit: type=1105 audit(1768957241.345:798): pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.348000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.362210 kernel: audit: type=1103 audit(1768957241.348:799): pid=5209 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.678017 sshd[5209]: Connection closed by 4.153.228.146 port 44242 Jan 21 01:00:41.678829 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:41.679000 audit[5205]: USER_END pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.683667 systemd[1]: sshd@13-10.0.0.94:22-4.153.228.146:44242.service: Deactivated successfully. Jan 21 01:00:41.685865 systemd[1]: session-15.scope: Deactivated successfully. Jan 21 01:00:41.686167 kernel: audit: type=1106 audit(1768957241.679:800): pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.679000 audit[5205]: CRED_DISP pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.688803 systemd-logind[1652]: Session 15 logged out. Waiting for processes to exit. Jan 21 01:00:41.690856 kernel: audit: type=1104 audit(1768957241.679:801): pid=5205 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:41.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.94:22-4.153.228.146:44242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:41.692481 systemd-logind[1652]: Removed session 15. 
Jan 21 01:00:42.599030 kubelet[2881]: E0121 01:00:42.598976 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:00:43.599174 kubelet[2881]: E0121 01:00:43.599118 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:00:46.598801 kubelet[2881]: E0121 01:00:46.598747 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:00:46.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.94:22-4.153.228.146:43948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:46.791575 systemd[1]: Started sshd@14-10.0.0.94:22-4.153.228.146:43948.service - OpenSSH per-connection server daemon (4.153.228.146:43948). Jan 21 01:00:46.794168 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:00:46.794233 kernel: audit: type=1130 audit(1768957246.790:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.94:22-4.153.228.146:43948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:47.335000 audit[5221]: USER_ACCT pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.341144 sshd[5221]: Accepted publickey for core from 4.153.228.146 port 43948 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:47.341424 kernel: audit: type=1101 audit(1768957247.335:804): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.341000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.346559 kernel: audit: type=1103 audit(1768957247.341:805): pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.343325 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:47.352706 kernel: audit: type=1006 audit(1768957247.341:806): pid=5221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 21 01:00:47.341000 audit[5221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd4c4cb00 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:47.358301 kernel: audit: type=1300 audit(1768957247.341:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd4c4cb00 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:47.341000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:47.361201 kernel: audit: type=1327 audit(1768957247.341:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:47.366178 systemd-logind[1652]: New session 16 of user core. Jan 21 01:00:47.370066 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 21 01:00:47.373000 audit[5221]: USER_START pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.379191 kernel: audit: type=1105 audit(1768957247.373:807): pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.379000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.384167 kernel: audit: type=1103 audit(1768957247.379:808): pid=5225 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.611887 kubelet[2881]: E0121 01:00:47.610958 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:00:47.749174 sshd[5225]: Connection closed by 4.153.228.146 port 43948 Jan 21 01:00:47.749607 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:47.749000 audit[5221]: USER_END pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.753975 systemd[1]: sshd@14-10.0.0.94:22-4.153.228.146:43948.service: Deactivated successfully. Jan 21 01:00:47.756188 kernel: audit: type=1106 audit(1768957247.749:809): pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.756391 systemd[1]: session-16.scope: Deactivated successfully. Jan 21 01:00:47.757644 systemd-logind[1652]: Session 16 logged out. 
Waiting for processes to exit. Jan 21 01:00:47.759579 systemd-logind[1652]: Removed session 16. Jan 21 01:00:47.749000 audit[5221]: CRED_DISP pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.763177 kernel: audit: type=1104 audit(1768957247.749:810): pid=5221 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:47.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.94:22-4.153.228.146:43948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:50.598163 kubelet[2881]: E0121 01:00:50.598110 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:00:52.598771 kubelet[2881]: E0121 01:00:52.598700 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:00:52.866491 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:00:52.866596 kernel: audit: type=1130 audit(1768957252.862:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.94:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:52.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.94:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:52.863415 systemd[1]: Started sshd@15-10.0.0.94:22-4.153.228.146:43960.service - OpenSSH per-connection server daemon (4.153.228.146:43960). 
Jan 21 01:00:53.412000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.415352 sshd[5236]: Accepted publickey for core from 4.153.228.146 port 43960 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:53.418273 kernel: audit: type=1101 audit(1768957253.412:813): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.418331 kernel: audit: type=1103 audit(1768957253.417:814): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.417000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.419124 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:53.423190 kernel: audit: type=1006 audit(1768957253.417:815): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 21 01:00:53.417000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7887a070 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:53.417000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:53.430240 kernel: audit: type=1300 audit(1768957253.417:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7887a070 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:53.430280 kernel: audit: type=1327 audit(1768957253.417:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:53.432685 systemd-logind[1652]: New session 17 of user core. Jan 21 01:00:53.438351 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 21 01:00:53.439000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.442000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.447200 kernel: audit: type=1105 audit(1768957253.439:816): pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.447257 kernel: audit: type=1103 audit(1768957253.442:817): pid=5240 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.780136 sshd[5240]: Connection closed by 4.153.228.146 port 43960 Jan 21 01:00:53.780600 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:53.780000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.784242 systemd[1]: sshd@15-10.0.0.94:22-4.153.228.146:43960.service: Deactivated successfully. Jan 21 01:00:53.789499 kernel: audit: type=1106 audit(1768957253.780:818): pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.788871 systemd-logind[1652]: Session 17 logged out. Waiting for processes to exit. Jan 21 01:00:53.780000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.790758 systemd[1]: session-17.scope: Deactivated successfully. Jan 21 01:00:53.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.94:22-4.153.228.146:43960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:53.793211 systemd-logind[1652]: Removed session 17. 
Jan 21 01:00:53.794188 kernel: audit: type=1104 audit(1768957253.780:819): pid=5236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:53.884006 systemd[1]: Started sshd@16-10.0.0.94:22-4.153.228.146:43972.service - OpenSSH per-connection server daemon (4.153.228.146:43972). Jan 21 01:00:53.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.94:22-4.153.228.146:43972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:54.408000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:54.410937 sshd[5252]: Accepted publickey for core from 4.153.228.146 port 43972 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:54.410000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:54.410000 audit[5252]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa1590f20 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:54.410000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:54.412886 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:54.419041 systemd-logind[1652]: New session 18 of user core. Jan 21 01:00:54.425370 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 21 01:00:54.427000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:54.430000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.061188 sshd[5256]: Connection closed by 4.153.228.146 port 43972 Jan 21 01:00:55.061084 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:55.063000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.063000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.067396 systemd[1]: sshd@16-10.0.0.94:22-4.153.228.146:43972.service: Deactivated successfully. Jan 21 01:00:55.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.94:22-4.153.228.146:43972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:55.069766 systemd[1]: session-18.scope: Deactivated successfully. Jan 21 01:00:55.071805 systemd-logind[1652]: Session 18 logged out. Waiting for processes to exit. Jan 21 01:00:55.073542 systemd-logind[1652]: Removed session 18. Jan 21 01:00:55.173372 systemd[1]: Started sshd@17-10.0.0.94:22-4.153.228.146:60678.service - OpenSSH per-connection server daemon (4.153.228.146:60678). Jan 21 01:00:55.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.94:22-4.153.228.146:60678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 01:00:55.729000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.732335 sshd[5266]: Accepted publickey for core from 4.153.228.146 port 60678 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:55.732000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.734628 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:55.732000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff49af3d80 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:55.732000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:55.742146 systemd-logind[1652]: New session 19 of user core. Jan 21 01:00:55.749370 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 21 01:00:55.752000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:55.755000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:56.599536 kubelet[2881]: E0121 01:00:56.599461 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:00:56.656000 audit[5280]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:56.656000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd301acf0 a2=0 a3=7fffd301acdc items=0 ppid=3027 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:56.665000 audit[5280]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5280 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:56.665000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd301acf0 a2=0 a3=0 items=0 ppid=3027 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:56.680000 audit[5282]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:56.680000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe54591f60 a2=0 a3=7ffe54591f4c items=0 ppid=3027 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:56.686000 audit[5282]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:00:56.686000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe54591f60 a2=0 a3=0 items=0 ppid=3027 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:56.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:00:56.745813 sshd[5270]: Connection closed by 4.153.228.146 port 60678 Jan 21 01:00:56.746181 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:56.747000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:56.747000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:56.751187 systemd[1]: sshd@17-10.0.0.94:22-4.153.228.146:60678.service: Deactivated successfully. Jan 21 01:00:56.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.94:22-4.153.228.146:60678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:56.753077 systemd[1]: session-19.scope: Deactivated successfully. Jan 21 01:00:56.754759 systemd-logind[1652]: Session 19 logged out. Waiting for processes to exit. Jan 21 01:00:56.756094 systemd-logind[1652]: Removed session 19. Jan 21 01:00:56.856215 systemd[1]: Started sshd@18-10.0.0.94:22-4.153.228.146:60686.service - OpenSSH per-connection server daemon (4.153.228.146:60686). 
Jan 21 01:00:56.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-4.153.228.146:60686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:57.423000 audit[5287]: USER_ACCT pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.425211 sshd[5287]: Accepted publickey for core from 4.153.228.146 port 60686 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:57.424000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.424000 audit[5287]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff50774550 a2=3 a3=0 items=0 ppid=1 pid=5287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:57.424000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:57.427082 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:57.432091 systemd-logind[1652]: New session 20 of user core. Jan 21 01:00:57.440361 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 21 01:00:57.444000 audit[5287]: USER_START pid=5287 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.445000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.600415 kubelet[2881]: E0121 01:00:57.600359 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:00:57.932800 sshd[5291]: Connection closed by 4.153.228.146 port 60686 Jan 21 01:00:57.934314 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:57.938738 kernel: 
kauditd_printk_skb: 43 callbacks suppressed Jan 21 01:00:57.938836 kernel: audit: type=1106 audit(1768957257.934:849): pid=5287 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.934000 audit[5287]: USER_END pid=5287 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.942724 systemd[1]: sshd@18-10.0.0.94:22-4.153.228.146:60686.service: Deactivated successfully. Jan 21 01:00:57.935000 audit[5287]: CRED_DISP pid=5287 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.945805 systemd[1]: session-20.scope: Deactivated successfully. Jan 21 01:00:57.947371 kernel: audit: type=1104 audit(1768957257.935:850): pid=5287 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:57.951663 kernel: audit: type=1131 audit(1768957257.941:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-4.153.228.146:60686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:57.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-4.153.228.146:60686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:57.952014 systemd-logind[1652]: Session 20 logged out. Waiting for processes to exit. Jan 21 01:00:57.954275 systemd-logind[1652]: Removed session 20. Jan 21 01:00:58.049228 kernel: audit: type=1130 audit(1768957258.042:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-4.153.228.146:60692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:58.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-4.153.228.146:60692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:58.043399 systemd[1]: Started sshd@19-10.0.0.94:22-4.153.228.146:60692.service - OpenSSH per-connection server daemon (4.153.228.146:60692). 
Jan 21 01:00:58.589000 audit[5303]: USER_ACCT pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.591193 sshd[5303]: Accepted publickey for core from 4.153.228.146 port 60692 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:00:58.592584 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:00:58.590000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.597112 kernel: audit: type=1101 audit(1768957258.589:853): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.597166 kernel: audit: type=1103 audit(1768957258.590:854): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.600731 kernel: audit: type=1006 audit(1768957258.590:855): pid=5303 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 21 01:00:58.590000 audit[5303]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff380e0ef0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:58.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:58.608193 kernel: audit: type=1300 audit(1768957258.590:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff380e0ef0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:00:58.608261 kernel: audit: type=1327 audit(1768957258.590:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:00:58.607480 systemd-logind[1652]: New session 21 of user core. Jan 21 01:00:58.621492 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 21 01:00:58.625000 audit[5303]: USER_START pid=5303 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.632223 kernel: audit: type=1105 audit(1768957258.625:856): pid=5303 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.632000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.964446 sshd[5307]: Connection closed by 4.153.228.146 port 60692 Jan 21 01:00:58.964931 sshd-session[5303]: pam_unix(sshd:session): session closed for user core Jan 21 01:00:58.964000 audit[5303]: USER_END pid=5303 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.964000 audit[5303]: CRED_DISP pid=5303 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:00:58.969373 systemd[1]: sshd@19-10.0.0.94:22-4.153.228.146:60692.service: Deactivated successfully. Jan 21 01:00:58.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-4.153.228.146:60692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:00:58.972787 systemd[1]: session-21.scope: Deactivated successfully. Jan 21 01:00:58.974518 systemd-logind[1652]: Session 21 logged out. Waiting for processes to exit. Jan 21 01:00:58.978167 systemd-logind[1652]: Removed session 21. 
Jan 21 01:01:01.075000 audit[5319]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:01.075000 audit[5319]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca991f2e0 a2=0 a3=7ffca991f2cc items=0 ppid=3027 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:01.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:01.080000 audit[5319]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 01:01:01.080000 audit[5319]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffca991f2e0 a2=0 a3=7ffca991f2cc items=0 ppid=3027 pid=5319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:01.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 01:01:01.600454 kubelet[2881]: E0121 01:01:01.600107 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:01:02.599095 kubelet[2881]: E0121 01:01:02.599044 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:01:03.602041 kubelet[2881]: E0121 01:01:03.601990 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 
01:01:04.070927 systemd[1]: Started sshd@20-10.0.0.94:22-4.153.228.146:60694.service - OpenSSH per-connection server daemon (4.153.228.146:60694). Jan 21 01:01:04.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-4.153.228.146:60694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:04.072644 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 21 01:01:04.072725 kernel: audit: type=1130 audit(1768957264.070:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-4.153.228.146:60694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:04.596000 audit[5323]: USER_ACCT pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.602662 kernel: audit: type=1101 audit(1768957264.596:864): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.602748 sshd[5323]: Accepted publickey for core from 4.153.228.146 port 60694 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:01:04.601000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.603565 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:04.608205 kernel: audit: type=1103 audit(1768957264.601:865): pid=5323 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.608265 kernel: audit: type=1006 audit(1768957264.601:866): pid=5323 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 21 01:01:04.601000 audit[5323]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1af9c70 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:04.611583 kernel: audit: type=1300 audit(1768957264.601:866): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1af9c70 a2=3 a3=0 items=0 ppid=1 pid=5323 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:04.601000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:04.620565 kernel: audit: type=1327 audit(1768957264.601:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:04.620967 systemd-logind[1652]: New session 22 of user core. 
Jan 21 01:01:04.631370 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 21 01:01:04.633000 audit[5323]: USER_START pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.640692 kernel: audit: type=1105 audit(1768957264.633:867): pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.639000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.645232 kernel: audit: type=1103 audit(1768957264.639:868): pid=5327 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.945974 sshd[5327]: Connection closed by 4.153.228.146 port 60694 Jan 21 01:01:04.945902 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:04.946000 audit[5323]: USER_END pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.950964 systemd[1]: sshd@20-10.0.0.94:22-4.153.228.146:60694.service: Deactivated successfully. Jan 21 01:01:04.946000 audit[5323]: CRED_DISP pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.954256 kernel: audit: type=1106 audit(1768957264.946:869): pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.954337 kernel: audit: type=1104 audit(1768957264.946:870): pid=5323 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:04.954779 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 01:01:04.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-4.153.228.146:60694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:04.957713 systemd-logind[1652]: Session 22 logged out. 
Waiting for processes to exit. Jan 21 01:01:04.960025 systemd-logind[1652]: Removed session 22. Jan 21 01:01:05.599674 kubelet[2881]: E0121 01:01:05.599638 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:01:10.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.94:22-4.153.228.146:53384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:10.054414 systemd[1]: Started sshd@21-10.0.0.94:22-4.153.228.146:53384.service - OpenSSH per-connection server daemon (4.153.228.146:53384). Jan 21 01:01:10.055490 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:10.055518 kernel: audit: type=1130 audit(1768957270.053:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.94:22-4.153.228.146:53384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:10.570000 audit[5360]: USER_ACCT pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.571639 sshd[5360]: Accepted publickey for core from 4.153.228.146 port 53384 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:01:10.573499 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:10.576173 kernel: audit: type=1101 audit(1768957270.570:873): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.570000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.583280 kernel: audit: type=1103 audit(1768957270.570:874): pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.583350 kernel: audit: type=1006 audit(1768957270.570:875): pid=5360 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 01:01:10.570000 audit[5360]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd09bdffe0 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:10.570000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:10.591497 kernel: audit: type=1300 audit(1768957270.570:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd09bdffe0 a2=3 a3=0 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:10.591547 kernel: audit: type=1327 audit(1768957270.570:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:10.596888 systemd-logind[1652]: New session 23 of user core. Jan 21 01:01:10.600296 kubelet[2881]: E0121 01:01:10.600266 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:01:10.603356 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 21 01:01:10.606000 audit[5360]: USER_START pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.613179 kernel: audit: type=1105 audit(1768957270.606:876): pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.608000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.619178 kernel: audit: type=1103 audit(1768957270.608:877): pid=5364 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.919459 sshd[5364]: Connection closed by 4.153.228.146 port 53384 Jan 21 01:01:10.920354 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:10.920000 audit[5360]: USER_END pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.924000 audit[5360]: CRED_DISP pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.928781 kernel: audit: type=1106 
audit(1768957270.920:878): pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.928835 kernel: audit: type=1104 audit(1768957270.924:879): pid=5360 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:10.928431 systemd[1]: sshd@21-10.0.0.94:22-4.153.228.146:53384.service: Deactivated successfully. Jan 21 01:01:10.931663 systemd[1]: session-23.scope: Deactivated successfully. Jan 21 01:01:10.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.94:22-4.153.228.146:53384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:10.935925 systemd-logind[1652]: Session 23 logged out. Waiting for processes to exit. Jan 21 01:01:10.937123 systemd-logind[1652]: Removed session 23. Jan 21 01:01:11.602836 kubelet[2881]: E0121 01:01:11.602795 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:01:16.034364 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:16.034437 kernel: audit: type=1130 audit(1768957276.028:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.94:22-4.153.228.146:34284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:16.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.94:22-4.153.228.146:34284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:16.029407 systemd[1]: Started sshd@22-10.0.0.94:22-4.153.228.146:34284.service - OpenSSH per-connection server daemon (4.153.228.146:34284). 
Jan 21 01:01:16.559000 audit[5376]: USER_ACCT pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.566032 sshd[5376]: Accepted publickey for core from 4.153.228.146 port 34284 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:01:16.566363 kernel: audit: type=1101 audit(1768957276.559:882): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.569021 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:16.565000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.574171 kernel: audit: type=1103 audit(1768957276.565:883): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.574387 kernel: audit: type=1006 audit(1768957276.565:884): pid=5376 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 01:01:16.565000 audit[5376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0340dfa0 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.579588 kernel: audit: type=1300 audit(1768957276.565:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0340dfa0 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:16.565000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:16.582822 kernel: audit: type=1327 audit(1768957276.565:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:16.588532 systemd-logind[1652]: New session 24 of user core. Jan 21 01:01:16.599324 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 21 01:01:16.601691 kubelet[2881]: E0121 01:01:16.601576 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:01:16.603557 kubelet[2881]: E0121 01:01:16.602421 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:01:16.603658 kubelet[2881]: E0121 01:01:16.602465 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:01:16.603658 kubelet[2881]: E0121 01:01:16.602798 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:01:16.602000 audit[5376]: USER_START pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.609893 kernel: audit: type=1105 audit(1768957276.602:885): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.611000 audit[5380]: 
CRED_ACQ pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.617176 kernel: audit: type=1103 audit(1768957276.611:886): pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.958175 sshd[5380]: Connection closed by 4.153.228.146 port 34284 Jan 21 01:01:16.957150 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:16.958000 audit[5376]: USER_END pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.966203 kernel: audit: type=1106 audit(1768957276.958:887): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.964716 systemd[1]: sshd@22-10.0.0.94:22-4.153.228.146:34284.service: Deactivated successfully. Jan 21 01:01:16.958000 audit[5376]: CRED_DISP pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.968602 systemd[1]: session-24.scope: Deactivated successfully. Jan 21 01:01:16.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.94:22-4.153.228.146:34284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:16.971181 kernel: audit: type=1104 audit(1768957276.958:888): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:16.973075 systemd-logind[1652]: Session 24 logged out. Waiting for processes to exit. Jan 21 01:01:16.974102 systemd-logind[1652]: Removed session 24. Jan 21 01:01:21.598868 kubelet[2881]: E0121 01:01:21.598799 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:01:22.066675 systemd[1]: Started sshd@23-10.0.0.94:22-4.153.228.146:34290.service - OpenSSH per-connection server daemon (4.153.228.146:34290). 
Jan 21 01:01:22.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.94:22-4.153.228.146:34290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:22.068571 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:22.068651 kernel: audit: type=1130 audit(1768957282.065:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.94:22-4.153.228.146:34290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:22.630000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.636499 sshd[5398]: Accepted publickey for core from 4.153.228.146 port 34290 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:01:22.637976 kernel: audit: type=1101 audit(1768957282.630:891): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.640243 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:22.637000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.646288 kernel: audit: type=1103 audit(1768957282.637:892): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.656419 kernel: audit: type=1006 audit(1768957282.637:893): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 01:01:22.657281 systemd-logind[1652]: New session 25 of user core. Jan 21 01:01:22.637000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf0dd1040 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:22.664199 kernel: audit: type=1300 audit(1768957282.637:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf0dd1040 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:22.664885 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 21 01:01:22.637000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:22.671196 kernel: audit: type=1327 audit(1768957282.637:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:22.669000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.678179 kernel: audit: type=1105 audit(1768957282.669:894): pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.677000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:22.684229 kernel: audit: type=1103 audit(1768957282.677:895): pid=5402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:23.027882 sshd[5402]: Connection closed by 4.153.228.146 port 34290 Jan 21 01:01:23.027771 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:23.030000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:23.037272 kernel: audit: type=1106 audit(1768957283.030:896): pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:23.037763 systemd[1]: sshd@23-10.0.0.94:22-4.153.228.146:34290.service: Deactivated successfully. Jan 21 01:01:23.041497 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 01:01:23.043344 systemd-logind[1652]: Session 25 logged out. Waiting for processes to exit. Jan 21 01:01:23.030000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:23.048499 kernel: audit: type=1104 audit(1768957283.030:897): pid=5398 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:23.049125 systemd-logind[1652]: Removed session 25. 
Jan 21 01:01:23.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.94:22-4.153.228.146:34290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:25.598987 containerd[1672]: time="2026-01-21T01:01:25.598775976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 21 01:01:25.958315 containerd[1672]: time="2026-01-21T01:01:25.958271795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:25.959749 containerd[1672]: time="2026-01-21T01:01:25.959718127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 21 01:01:25.959830 containerd[1672]: time="2026-01-21T01:01:25.959787013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:25.960165 kubelet[2881]: E0121 01:01:25.959964 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:01:25.962260 kubelet[2881]: E0121 01:01:25.962226 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 21 01:01:25.962398 kubelet[2881]: E0121 01:01:25.962365 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:adf2c7942adb4caa8cbe3abb5e35591e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:25.964204 kubelet[2881]: E0121 01:01:25.964137 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:01:28.139749 systemd[1]: Started sshd@24-10.0.0.94:22-4.153.228.146:33916.service - OpenSSH per-connection server daemon (4.153.228.146:33916). Jan 21 01:01:28.144711 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 01:01:28.144847 kernel: audit: type=1130 audit(1768957288.139:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.94:22-4.153.228.146:33916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:28.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.94:22-4.153.228.146:33916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:28.598951 kubelet[2881]: E0121 01:01:28.598519 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b" Jan 21 01:01:28.714000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.718662 sshd[5416]: Accepted publickey for core from 4.153.228.146 port 33916 ssh2: RSA SHA256:31xSJzLknPW0WXQ1/0HnwH4E7nhabzVasrxSAbB59go Jan 21 01:01:28.720195 kernel: audit: type=1101 audit(1768957288.714:900): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.721072 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 01:01:28.719000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.725236 kernel: audit: type=1103 audit(1768957288.719:901): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.725326 kernel: audit: type=1006 audit(1768957288.719:902): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 21 01:01:28.719000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0b73d950 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:28.733307 kernel: audit: type=1300 audit(1768957288.719:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0b73d950 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 01:01:28.719000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:28.736218 kernel: audit: type=1327 audit(1768957288.719:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 01:01:28.737983 systemd-logind[1652]: New session 26 of user core. Jan 21 01:01:28.744349 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 21 01:01:28.746000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.750000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.753361 kernel: audit: type=1105 audit(1768957288.746:903): pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:28.753531 kernel: audit: type=1103 audit(1768957288.750:904): pid=5420 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:29.082282 sshd[5420]: Connection closed by 4.153.228.146 port 33916 Jan 21 01:01:29.083358 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 21 01:01:29.091197 kernel: audit: type=1106 audit(1768957289.084:905): pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:29.084000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:29.091871 systemd[1]: sshd@24-10.0.0.94:22-4.153.228.146:33916.service: Deactivated successfully. Jan 21 01:01:29.094362 systemd[1]: session-26.scope: Deactivated successfully. Jan 21 01:01:29.095767 systemd-logind[1652]: Session 26 logged out. Waiting for processes to exit. Jan 21 01:01:29.085000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:29.101211 kernel: audit: type=1104 audit(1768957289.085:906): pid=5416 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 21 01:01:29.101541 systemd-logind[1652]: Removed session 26. Jan 21 01:01:29.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.94:22-4.153.228.146:33916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 01:01:29.601324 kubelet[2881]: E0121 01:01:29.601037 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:01:29.602560 containerd[1672]: time="2026-01-21T01:01:29.601130931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 21 01:01:29.932189 containerd[1672]: time="2026-01-21T01:01:29.932084166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:29.934313 containerd[1672]: time="2026-01-21T01:01:29.934280060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 21 01:01:29.934394 containerd[1672]: time="2026-01-21T01:01:29.934359720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:29.934629 kubelet[2881]: E0121 01:01:29.934598 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:29.934676 kubelet[2881]: E0121 01:01:29.934642 2881 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 21 01:01:29.936284 kubelet[2881]: E0121 01:01:29.936240 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlqkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5489cbd567-6mfx7_calico-apiserver(b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:29.936852 containerd[1672]: time="2026-01-21T01:01:29.936835591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 21 01:01:29.937871 kubelet[2881]: E0121 01:01:29.937849 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:01:30.361993 containerd[1672]: time="2026-01-21T01:01:30.361858905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.363544 
containerd[1672]: time="2026-01-21T01:01:30.363484443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 21 01:01:30.363544 containerd[1672]: time="2026-01-21T01:01:30.363510821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.364086 kubelet[2881]: E0121 01:01:30.363661 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:01:30.364086 kubelet[2881]: E0121 01:01:30.363697 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 21 01:01:30.364086 kubelet[2881]: E0121 01:01:30.363798 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.365892 containerd[1672]: time="2026-01-21T01:01:30.365866428Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 21 01:01:30.706435 containerd[1672]: time="2026-01-21T01:01:30.706278954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:30.708005 containerd[1672]: time="2026-01-21T01:01:30.707906313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 21 01:01:30.708005 containerd[1672]: time="2026-01-21T01:01:30.707985199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:30.709283 kubelet[2881]: E0121 01:01:30.709244 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:01:30.709539 kubelet[2881]: E0121 01:01:30.709295 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 21 01:01:30.709539 kubelet[2881]: E0121 01:01:30.709407 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj4gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gng52_calico-system(4683584a-9c9b-48ab-9b3d-c5a314d23b04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:30.710603 kubelet[2881]: E0121 01:01:30.710559 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gng52" podUID="4683584a-9c9b-48ab-9b3d-c5a314d23b04" Jan 21 01:01:36.598258 kubelet[2881]: E0121 01:01:36.598210 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-xwgzw" podUID="6a600d11-4238-41cf-86d6-99ea151288a7" Jan 21 01:01:40.599728 containerd[1672]: time="2026-01-21T01:01:40.599370778Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 21 01:01:40.941821 containerd[1672]: time="2026-01-21T01:01:40.941771053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:40.943474 containerd[1672]: time="2026-01-21T01:01:40.943431576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 21 01:01:40.943570 containerd[1672]: time="2026-01-21T01:01:40.943510826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:40.943741 kubelet[2881]: E0121 01:01:40.943671 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:01:40.943741 kubelet[2881]: E0121 01:01:40.943731 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 21 01:01:40.944510 kubelet[2881]: E0121 01:01:40.943883 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6snb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d559fddc-6x8n7_calico-system(181c2eed-c9fe-4d1f-ab58-c3add0b057f7): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:40.945954 kubelet[2881]: E0121 01:01:40.945905 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d559fddc-6x8n7" podUID="181c2eed-c9fe-4d1f-ab58-c3add0b057f7" Jan 21 01:01:41.600132 containerd[1672]: time="2026-01-21T01:01:41.599705088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 21 01:01:41.944568 containerd[1672]: time="2026-01-21T01:01:41.944409731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:41.946164 containerd[1672]: time="2026-01-21T01:01:41.946087070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 21 01:01:41.946164 containerd[1672]: time="2026-01-21T01:01:41.946138664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:41.947266 kubelet[2881]: E0121 01:01:41.947031 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:01:41.947266 kubelet[2881]: E0121 01:01:41.947074 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 21 01:01:41.947266 kubelet[2881]: E0121 01:01:41.947211 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2jpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4stx9_calico-system(ab9475e3-845f-4249-abaa-5891387a4c3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:41.949172 kubelet[2881]: E0121 01:01:41.948415 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4stx9" podUID="ab9475e3-845f-4249-abaa-5891387a4c3a" Jan 21 01:01:42.599708 kubelet[2881]: E0121 01:01:42.599657 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5489cbd567-6mfx7" podUID="b29f53d3-a9a7-483a-a4bb-96ad4d0f4f37" Jan 21 01:01:42.599938 containerd[1672]: time="2026-01-21T01:01:42.599881843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 21 01:01:42.936367 containerd[1672]: time="2026-01-21T01:01:42.936311841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 21 01:01:42.937937 containerd[1672]: time="2026-01-21T01:01:42.937894826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 21 01:01:42.938048 containerd[1672]: time="2026-01-21T01:01:42.937970957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 21 01:01:42.938170 kubelet[2881]: E0121 01:01:42.938125 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:01:42.938216 kubelet[2881]: E0121 01:01:42.938187 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 21 01:01:42.938446 kubelet[2881]: E0121 01:01:42.938412 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-fbd8b4d78-76pnr_calico-system(0723cca5-619b-4e7c-893b-f737ac25ba0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 21 01:01:42.939744 kubelet[2881]: E0121 01:01:42.939708 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fbd8b4d78-76pnr" podUID="0723cca5-619b-4e7c-893b-f737ac25ba0b"
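The repeated "fetch failed after status: 404 Not Found" responses above mean the registry has no v3.30.4 manifest under the ghcr.io/flatcar/calico repositories for containerd to resolve, which is what drives the ErrImagePull / ImagePullBackOff cycle the kubelet reports. Below is a minimal sketch of how one might confirm that outside of containerd, assuming ghcr.io follows the standard anonymous token-auth flow of the OCI distribution API for public repositories; the repository and tag names are copied from the log, and everything else is illustrative rather than part of the log itself.

    #!/usr/bin/env python3
    """Illustrative check: does a given tag exist on ghcr.io?

    Assumptions (not taken from the log): ghcr.io issues anonymous pull tokens
    via its /token endpoint and serves manifests under the /v2/ API, as other
    registries implementing the distribution spec do.
    """
    import requests

    REGISTRY = "ghcr.io"
    REPO = "flatcar/calico/apiserver"   # repository from the failing pull above
    TAG = "v3.30.4"                     # tag containerd could not resolve

    def tag_exists(repo: str, tag: str) -> bool:
        # Step 1: request an anonymous bearer token scoped to pulling the repository.
        token = requests.get(
            f"https://{REGISTRY}/token",
            params={"service": REGISTRY, "scope": f"repository:{repo}:pull"},
            timeout=10,
        ).json()["token"]

        # Step 2: ask for the tag's manifest; a 404 here corresponds to the
        # "not found" errors containerd logged for each calico image.
        resp = requests.head(
            f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            timeout=10,
        )
        return resp.status_code == 200

    if __name__ == "__main__":
        print(f"{REPO}:{TAG} exists:", tag_exists(REPO, TAG))

If the check returns False for every image named in the log (apiserver, csi, node-driver-registrar, whisker-backend, goldmane, kube-controllers), the failures are registry-side (missing or not-yet-published tags) rather than a node networking or credential problem, which matches the kubelet backing off rather than retrying with auth errors.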