Jan 27 05:39:20.114732 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 03:09:34 -00 2026
Jan 27 05:39:20.114764 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e
Jan 27 05:39:20.114775 kernel: BIOS-provided physical RAM map:
Jan 27 05:39:20.114782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 27 05:39:20.114788 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 27 05:39:20.114794 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 27 05:39:20.114804 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 27 05:39:20.114811 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 27 05:39:20.114817 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 27 05:39:20.114824 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 27 05:39:20.114830 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable
Jan 27 05:39:20.114836 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 27 05:39:20.114843 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 27 05:39:20.114849 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 27 05:39:20.114860 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 27 05:39:20.114867 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 27 05:39:20.114873 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 27 05:39:20.114880 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 27 05:39:20.114887 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 27 05:39:20.114893 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 27 05:39:20.114903 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 27 05:39:20.114909 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 27 05:39:20.114916 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 27 05:39:20.114923 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 27 05:39:20.114929 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 27 05:39:20.114936 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 05:39:20.114942 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 27 05:39:20.114949 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 27 05:39:20.114956 kernel: NX (Execute Disable) protection: active
Jan 27 05:39:20.114962 kernel: APIC: Static calls initialized
Jan 27 05:39:20.114969 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable
Jan 27 05:39:20.114978 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable
Jan 27 05:39:20.114985 kernel: extended physical RAM map:
Jan 27 05:39:20.114992 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 27 05:39:20.114998 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 27 05:39:20.115005 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 27 05:39:20.115012 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 27 05:39:20.115018 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 27 05:39:20.115025 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 27 05:39:20.115032 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 27 05:39:20.115044 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable
Jan 27 05:39:20.115051 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable
Jan 27 05:39:20.115058 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable
Jan 27 05:39:20.115065 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable
Jan 27 05:39:20.115073 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable
Jan 27 05:39:20.115080 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 27 05:39:20.115087 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 27 05:39:20.115094 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 27 05:39:20.115101 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 27 05:39:20.115108 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 27 05:39:20.115115 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 27 05:39:20.115122 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 27 05:39:20.115129 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 27 05:39:20.115136 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 27 05:39:20.115143 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 27 05:39:20.115152 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 27 05:39:20.115159 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 27 05:39:20.115166 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 27 05:39:20.115251 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 27 05:39:20.115259 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 27 05:39:20.115266 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 27 05:39:20.115273 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 27 05:39:20.115280 kernel: efi: EFI v2.7 by EDK II
Jan 27 05:39:20.115287 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018
Jan 27 05:39:20.115294 kernel: random: crng init done
Jan 27 05:39:20.115301 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 27 05:39:20.115311 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 27 05:39:20.115318 kernel: secureboot: Secure boot disabled
Jan 27 05:39:20.115325 kernel: SMBIOS 2.8 present.
Jan 27 05:39:20.115332 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 27 05:39:20.115339 kernel: DMI: Memory slots populated: 1/1
Jan 27 05:39:20.115346 kernel: Hypervisor detected: KVM
Jan 27 05:39:20.115353 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 27 05:39:20.115360 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 27 05:39:20.115367 kernel: kvm-clock: using sched offset of 5740295769 cycles
Jan 27 05:39:20.115374 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 27 05:39:20.115384 kernel: tsc: Detected 2294.608 MHz processor
Jan 27 05:39:20.115392 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 27 05:39:20.115400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 27 05:39:20.115408 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000
Jan 27 05:39:20.115415 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 27 05:39:20.115423 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 27 05:39:20.115431 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 27 05:39:20.115438 kernel: Using GB pages for direct mapping
Jan 27 05:39:20.115448 kernel: ACPI: Early table checksum verification disabled
Jan 27 05:39:20.115455 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 27 05:39:20.115463 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jan 27 05:39:20.115471 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 27 05:39:20.115478 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 27 05:39:20.115485 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 27 05:39:20.115493 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 27 05:39:20.115502 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 27 05:39:20.115510 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 27 05:39:20.115517 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 27 05:39:20.115525 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3]
Jan 27 05:39:20.115532 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b]
Jan 27 05:39:20.115540 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 27 05:39:20.115548 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f]
Jan 27 05:39:20.115556 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b]
Jan 27 05:39:20.115564 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
Jan 27 05:39:20.115571 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
Jan 27 05:39:20.115579 kernel: No NUMA configuration found
Jan 27 05:39:20.115586 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 27 05:39:20.115594 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jan 27 05:39:20.115601 kernel: Zone ranges:
Jan 27 05:39:20.115609 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 27 05:39:20.115618 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 27 05:39:20.115625 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 27 05:39:20.115633 kernel: Device empty
Jan 27 05:39:20.115640 kernel: Movable zone start for each node
Jan 27 05:39:20.115648 kernel: Early memory node ranges
Jan 27 05:39:20.115655 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 27 05:39:20.115662 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 27 05:39:20.115671 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 27 05:39:20.115679 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 27 05:39:20.115686 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff]
Jan 27 05:39:20.115694 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff]
Jan 27 05:39:20.115701 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff]
Jan 27 05:39:20.115715 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff]
Jan 27 05:39:20.115724 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff]
Jan 27 05:39:20.115732 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 27 05:39:20.115739 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 27 05:39:20.115747 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 05:39:20.115757 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 27 05:39:20.115765 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 27 05:39:20.115773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 27 05:39:20.115781 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 27 05:39:20.115790 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 27 05:39:20.115798 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Jan 27 05:39:20.115806 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 27 05:39:20.115814 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 27 05:39:20.115822 kernel: On node 0, zone Normal: 276 pages in unavailable ranges
Jan 27 05:39:20.115831 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 27 05:39:20.115839 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 27 05:39:20.115847 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 27 05:39:20.115857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 27 05:39:20.115865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 27 05:39:20.115873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 27 05:39:20.115881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 27 05:39:20.115889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 27 05:39:20.117190 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 27 05:39:20.117208 kernel: TSC deadline timer available
Jan 27 05:39:20.117220 kernel: CPU topo: Max. logical packages: 2
Jan 27 05:39:20.117228 kernel: CPU topo: Max. logical dies: 2
Jan 27 05:39:20.117236 kernel: CPU topo: Max. dies per package: 1
Jan 27 05:39:20.117244 kernel: CPU topo: Max. threads per core: 1
Jan 27 05:39:20.117252 kernel: CPU topo: Num. cores per package: 1
Jan 27 05:39:20.117261 kernel: CPU topo: Num. threads per package: 1
Jan 27 05:39:20.117268 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 27 05:39:20.117278 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 27 05:39:20.117287 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 27 05:39:20.117295 kernel: kvm-guest: setup PV sched yield
Jan 27 05:39:20.117303 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 27 05:39:20.117311 kernel: Booting paravirtualized kernel on KVM
Jan 27 05:39:20.117319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 27 05:39:20.117328 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 27 05:39:20.117336 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 27 05:39:20.117346 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 27 05:39:20.117355 kernel: pcpu-alloc: [0] 0 1
Jan 27 05:39:20.117363 kernel: kvm-guest: PV spinlocks enabled
Jan 27 05:39:20.117371 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 27 05:39:20.117381 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e
Jan 27 05:39:20.117389 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 27 05:39:20.117399 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 27 05:39:20.117408 kernel: Fallback order for Node 0: 0
Jan 27 05:39:20.117416 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694
Jan 27 05:39:20.117424 kernel: Policy zone: Normal
Jan 27 05:39:20.117432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 27 05:39:20.117441 kernel: software IO TLB: area num 2.
Jan 27 05:39:20.117449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 27 05:39:20.117459 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 27 05:39:20.117467 kernel: ftrace: allocated 157 pages with 5 groups
Jan 27 05:39:20.117475 kernel: Dynamic Preempt: voluntary
Jan 27 05:39:20.117483 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 27 05:39:20.117493 kernel: rcu: RCU event tracing is enabled.
Jan 27 05:39:20.117501 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 27 05:39:20.117509 kernel: Trampoline variant of Tasks RCU enabled.
Jan 27 05:39:20.117517 kernel: Rude variant of Tasks RCU enabled.
Jan 27 05:39:20.117526 kernel: Tracing variant of Tasks RCU enabled.
Jan 27 05:39:20.117535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 27 05:39:20.117542 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 27 05:39:20.117551 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 27 05:39:20.117559 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 27 05:39:20.117567 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 27 05:39:20.117575 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 27 05:39:20.117585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 27 05:39:20.117594 kernel: Console: colour dummy device 80x25
Jan 27 05:39:20.117602 kernel: printk: legacy console [tty0] enabled
Jan 27 05:39:20.117610 kernel: printk: legacy console [ttyS0] enabled
Jan 27 05:39:20.117618 kernel: ACPI: Core revision 20240827
Jan 27 05:39:20.117627 kernel: APIC: Switch to symmetric I/O mode setup
Jan 27 05:39:20.117635 kernel: x2apic enabled
Jan 27 05:39:20.117643 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 27 05:39:20.117653 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 27 05:39:20.117661 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 27 05:39:20.117670 kernel: kvm-guest: setup PV IPIs
Jan 27 05:39:20.117678 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 27 05:39:20.117686 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 27 05:39:20.117695 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 27 05:39:20.117704 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 27 05:39:20.117712 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 27 05:39:20.117720 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 27 05:39:20.117728 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 27 05:39:20.117735 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 27 05:39:20.117743 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 27 05:39:20.117751 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 27 05:39:20.117758 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 27 05:39:20.117766 kernel: TAA: Mitigation: Clear CPU buffers
Jan 27 05:39:20.117774 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 27 05:39:20.117781 kernel: active return thunk: its_return_thunk
Jan 27 05:39:20.117791 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 27 05:39:20.117798 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 27 05:39:20.117806 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 27 05:39:20.117814 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 27 05:39:20.117822 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 27 05:39:20.117830 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 27 05:39:20.117837 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 27 05:39:20.117845 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 27 05:39:20.117852 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 27 05:39:20.117862 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 27 05:39:20.117869 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 27 05:39:20.117877 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 27 05:39:20.117884 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 27 05:39:20.117892 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 27 05:39:20.117900 kernel: Freeing SMP alternatives memory: 32K
Jan 27 05:39:20.117907 kernel: pid_max: default: 32768 minimum: 301
Jan 27 05:39:20.117915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 27 05:39:20.117923 kernel: landlock: Up and running.
Jan 27 05:39:20.117930 kernel: SELinux: Initializing.
Jan 27 05:39:20.117938 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 27 05:39:20.117946 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 27 05:39:20.117955 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Jan 27 05:39:20.117963 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Jan 27 05:39:20.117971 kernel: ... version: 2
Jan 27 05:39:20.117979 kernel: ... bit width: 48
Jan 27 05:39:20.117987 kernel: ... generic registers: 8
Jan 27 05:39:20.117995 kernel: ... value mask: 0000ffffffffffff
Jan 27 05:39:20.118003 kernel: ... max period: 00007fffffffffff
Jan 27 05:39:20.118013 kernel: ... fixed-purpose events: 3
Jan 27 05:39:20.118021 kernel: ... event mask: 00000007000000ff
Jan 27 05:39:20.118030 kernel: signal: max sigframe size: 3632
Jan 27 05:39:20.118037 kernel: rcu: Hierarchical SRCU implementation.
Jan 27 05:39:20.118046 kernel: rcu: Max phase no-delay instances is 400.
Jan 27 05:39:20.118054 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 27 05:39:20.118062 kernel: smp: Bringing up secondary CPUs ...
Jan 27 05:39:20.118070 kernel: smpboot: x86: Booting SMP configuration:
Jan 27 05:39:20.118080 kernel: .... node #0, CPUs: #1
Jan 27 05:39:20.118088 kernel: smp: Brought up 1 node, 2 CPUs
Jan 27 05:39:20.118096 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 27 05:39:20.118105 kernel: Memory: 3969764K/4186776K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 212136K reserved, 0K cma-reserved)
Jan 27 05:39:20.118113 kernel: devtmpfs: initialized
Jan 27 05:39:20.118121 kernel: x86/mm: Memory block size: 128MB
Jan 27 05:39:20.118129 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 27 05:39:20.118139 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 27 05:39:20.118148 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 27 05:39:20.118156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 27 05:39:20.118164 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes)
Jan 27 05:39:20.118183 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
Jan 27 05:39:20.118197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 27 05:39:20.118206 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 27 05:39:20.118216 kernel: pinctrl core: initialized pinctrl subsystem
Jan 27 05:39:20.118224 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 27 05:39:20.118233 kernel: audit: initializing netlink subsys (disabled)
Jan 27 05:39:20.118241 kernel: audit: type=2000 audit(1769492356.796:1): state=initialized audit_enabled=0 res=1
Jan 27 05:39:20.118249 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 27 05:39:20.118256 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 27 05:39:20.118265 kernel: cpuidle: using governor menu
Jan 27 05:39:20.118274 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 27 05:39:20.118282 kernel: dca service started, version 1.12.1
Jan 27 05:39:20.118290 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 27 05:39:20.118299 kernel: PCI: Using configuration type 1 for base access
Jan 27 05:39:20.118307 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 27 05:39:20.118315 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 27 05:39:20.118323 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 27 05:39:20.118332 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 27 05:39:20.118341 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 27 05:39:20.118349 kernel: ACPI: Added _OSI(Module Device)
Jan 27 05:39:20.118357 kernel: ACPI: Added _OSI(Processor Device)
Jan 27 05:39:20.118365 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 27 05:39:20.118373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 27 05:39:20.118382 kernel: ACPI: Interpreter enabled
Jan 27 05:39:20.118392 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 27 05:39:20.118400 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 27 05:39:20.118408 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 27 05:39:20.118416 kernel: PCI: Using E820 reservations for host bridge windows
Jan 27 05:39:20.118425 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 27 05:39:20.118433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 27 05:39:20.118608 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 27 05:39:20.118715 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 27 05:39:20.118812 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 27 05:39:20.118822 kernel: PCI host bridge to bus 0000:00
Jan 27 05:39:20.118921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 27 05:39:20.119009 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 27 05:39:20.119098 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 27 05:39:20.120854 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 27 05:39:20.120963 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 27 05:39:20.121052 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Jan 27 05:39:20.121138 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 27 05:39:20.121268 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 27 05:39:20.121379 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 27 05:39:20.121619 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 27 05:39:20.121725 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Jan 27 05:39:20.121822 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Jan 27 05:39:20.121917 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 27 05:39:20.122017 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 27 05:39:20.122122 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.122231 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Jan 27 05:39:20.122329 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 27 05:39:20.123279 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Jan 27 05:39:20.123389 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Jan 27 05:39:20.123491 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 27 05:39:20.123598 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.123701 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Jan 27 05:39:20.123798 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 27 05:39:20.123892 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Jan 27 05:39:20.123990 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 27 05:39:20.124094 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.124200 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Jan 27 05:39:20.127289 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 27 05:39:20.127398 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Jan 27 05:39:20.127497 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 27 05:39:20.127606 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.127706 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Jan 27 05:39:20.127804 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 27 05:39:20.127903 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Jan 27 05:39:20.128000 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 27 05:39:20.128102 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.128243 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Jan 27 05:39:20.128344 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 27 05:39:20.128440 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Jan 27 05:39:20.128535 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 27 05:39:20.128650 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.128748 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Jan 27 05:39:20.128846 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 27 05:39:20.128941 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Jan 27 05:39:20.129036 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 27 05:39:20.129140 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.129246 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Jan 27 05:39:20.129346 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 27 05:39:20.129441 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Jan 27 05:39:20.129536 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 27 05:39:20.129636 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.129732 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Jan 27 05:39:20.129827 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 27 05:39:20.129925 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Jan 27 05:39:20.130022 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 27 05:39:20.130123 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.130263 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Jan 27 05:39:20.130361 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 27 05:39:20.130458 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Jan 27 05:39:20.130556 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 27 05:39:20.130661 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 27 05:39:20.130770 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Jan 27 05:39:20.130867 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 27 05:39:20.130962 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Jan 27 05:39:20.131057 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 27 05:39:20.131160 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.131265 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 27 05:39:20.131360 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:39:20.131454 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 27 05:39:20.131549 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:39:20.131650 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.131748 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 27 05:39:20.131845 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:39:20.131939 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 27 05:39:20.132036 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:39:20.132135 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.132238 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 27 05:39:20.132333 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:39:20.132427 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 27 05:39:20.132521 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:39:20.132635 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.132731 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 27 05:39:20.132825 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:39:20.132921 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 27 05:39:20.133020 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:39:20.133123 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.135263 kernel: pci 0000:00:03.6: BAR 0 [mem 
0x8438f000-0x8438ffff] Jan 27 05:39:20.135390 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:39:20.135488 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 27 05:39:20.135584 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:39:20.135687 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.135783 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 27 05:39:20.135882 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 05:39:20.135976 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 27 05:39:20.136070 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 05:39:20.136170 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.136287 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 27 05:39:20.136382 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:39:20.136479 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 27 05:39:20.136595 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:39:20.136696 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.136792 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 27 05:39:20.136887 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:39:20.136984 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 27 05:39:20.137082 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:39:20.137195 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.137292 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 27 05:39:20.137386 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:39:20.137482 kernel: pci 0000:00:04.2: bridge window [mem 
0x81c00000-0x81dfffff] Jan 27 05:39:20.137576 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:39:20.137680 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.137776 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 27 05:39:20.137870 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:39:20.137963 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 27 05:39:20.138058 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:39:20.138157 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.139341 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 27 05:39:20.139444 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:39:20.139539 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 27 05:39:20.139633 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:39:20.139734 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.139830 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 27 05:39:20.139927 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:39:20.140021 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 27 05:39:20.140115 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:39:20.140233 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.140332 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 27 05:39:20.140426 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:39:20.140521 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 27 05:39:20.140626 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:39:20.140728 kernel: pci 
0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.140826 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff] Jan 27 05:39:20.140921 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:39:20.141015 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 27 05:39:20.141109 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:39:20.142057 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.142166 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 27 05:39:20.142281 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:39:20.142376 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 27 05:39:20.142471 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:39:20.142572 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.142668 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 27 05:39:20.142763 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 27 05:39:20.142861 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 27 05:39:20.142956 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 05:39:20.143060 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.143155 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 27 05:39:20.144551 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:39:20.144673 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 27 05:39:20.144776 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:39:20.144880 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.144979 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 27 05:39:20.145073 kernel: 
pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:39:20.145167 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 27 05:39:20.145380 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 27 05:39:20.145486 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 05:39:20.145582 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 27 05:39:20.145680 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:39:20.145774 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 27 05:39:20.145869 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:39:20.145971 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 27 05:39:20.146071 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 27 05:39:20.146185 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 27 05:39:20.146284 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 27 05:39:20.146379 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 27 05:39:20.146478 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 27 05:39:20.146575 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 27 05:39:20.146683 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 27 05:39:20.146781 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 27 05:39:20.147764 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:39:20.147873 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 27 05:39:20.147970 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 27 05:39:20.148071 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:39:20.148168 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 
05:39:20.148299 kernel: pci_bus 0000:02: extended config space not accessible Jan 27 05:39:20.148312 kernel: acpiphp: Slot [1] registered Jan 27 05:39:20.148321 kernel: acpiphp: Slot [0] registered Jan 27 05:39:20.148331 kernel: acpiphp: Slot [2] registered Jan 27 05:39:20.148342 kernel: acpiphp: Slot [3] registered Jan 27 05:39:20.148351 kernel: acpiphp: Slot [4] registered Jan 27 05:39:20.148360 kernel: acpiphp: Slot [5] registered Jan 27 05:39:20.148368 kernel: acpiphp: Slot [6] registered Jan 27 05:39:20.148376 kernel: acpiphp: Slot [7] registered Jan 27 05:39:20.148385 kernel: acpiphp: Slot [8] registered Jan 27 05:39:20.148394 kernel: acpiphp: Slot [9] registered Jan 27 05:39:20.148402 kernel: acpiphp: Slot [10] registered Jan 27 05:39:20.148413 kernel: acpiphp: Slot [11] registered Jan 27 05:39:20.148421 kernel: acpiphp: Slot [12] registered Jan 27 05:39:20.148430 kernel: acpiphp: Slot [13] registered Jan 27 05:39:20.148438 kernel: acpiphp: Slot [14] registered Jan 27 05:39:20.148447 kernel: acpiphp: Slot [15] registered Jan 27 05:39:20.148455 kernel: acpiphp: Slot [16] registered Jan 27 05:39:20.148464 kernel: acpiphp: Slot [17] registered Jan 27 05:39:20.148474 kernel: acpiphp: Slot [18] registered Jan 27 05:39:20.148482 kernel: acpiphp: Slot [19] registered Jan 27 05:39:20.148491 kernel: acpiphp: Slot [20] registered Jan 27 05:39:20.148499 kernel: acpiphp: Slot [21] registered Jan 27 05:39:20.148508 kernel: acpiphp: Slot [22] registered Jan 27 05:39:20.148516 kernel: acpiphp: Slot [23] registered Jan 27 05:39:20.148525 kernel: acpiphp: Slot [24] registered Jan 27 05:39:20.148535 kernel: acpiphp: Slot [25] registered Jan 27 05:39:20.148543 kernel: acpiphp: Slot [26] registered Jan 27 05:39:20.148552 kernel: acpiphp: Slot [27] registered Jan 27 05:39:20.148571 kernel: acpiphp: Slot [28] registered Jan 27 05:39:20.148579 kernel: acpiphp: Slot [29] registered Jan 27 05:39:20.148587 kernel: acpiphp: Slot [30] registered Jan 27 05:39:20.148596 kernel: acpiphp: 
Slot [31] registered Jan 27 05:39:20.148707 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 27 05:39:20.148814 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 27 05:39:20.148914 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:39:20.148927 kernel: acpiphp: Slot [0-2] registered Jan 27 05:39:20.149030 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 27 05:39:20.149130 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 27 05:39:20.149242 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 27 05:39:20.149344 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 27 05:39:20.149443 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 05:39:20.149455 kernel: acpiphp: Slot [0-3] registered Jan 27 05:39:20.149559 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 27 05:39:20.149659 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 27 05:39:20.149757 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 27 05:39:20.149857 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 05:39:20.149868 kernel: acpiphp: Slot [0-4] registered Jan 27 05:39:20.149969 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 27 05:39:20.150067 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 27 05:39:20.150165 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 05:39:20.150193 kernel: acpiphp: Slot [0-5] registered Jan 27 05:39:20.150297 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 27 05:39:20.150396 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 27 05:39:20.150493 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 27 05:39:20.150590 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 05:39:20.150601 kernel: acpiphp: Slot [0-6] 
registered Jan 27 05:39:20.150700 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 05:39:20.150712 kernel: acpiphp: Slot [0-7] registered Jan 27 05:39:20.150805 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 05:39:20.150817 kernel: acpiphp: Slot [0-8] registered Jan 27 05:39:20.150912 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 05:39:20.150924 kernel: acpiphp: Slot [0-9] registered Jan 27 05:39:20.151019 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 27 05:39:20.151032 kernel: acpiphp: Slot [0-10] registered Jan 27 05:39:20.151128 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 27 05:39:20.151139 kernel: acpiphp: Slot [0-11] registered Jan 27 05:39:20.152091 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:39:20.152106 kernel: acpiphp: Slot [0-12] registered Jan 27 05:39:20.152228 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:39:20.152245 kernel: acpiphp: Slot [0-13] registered Jan 27 05:39:20.152350 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:39:20.152362 kernel: acpiphp: Slot [0-14] registered Jan 27 05:39:20.152457 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:39:20.152468 kernel: acpiphp: Slot [0-15] registered Jan 27 05:39:20.152571 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:39:20.152585 kernel: acpiphp: Slot [0-16] registered Jan 27 05:39:20.152681 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 05:39:20.152694 kernel: acpiphp: Slot [0-17] registered Jan 27 05:39:20.152788 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:39:20.152799 kernel: acpiphp: Slot [0-18] registered Jan 27 05:39:20.152895 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:39:20.152906 kernel: acpiphp: Slot [0-19] registered Jan 27 05:39:20.153004 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:39:20.153015 kernel: acpiphp: Slot [0-20] registered Jan 27 05:39:20.153111 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:39:20.153123 
kernel: acpiphp: Slot [0-21] registered Jan 27 05:39:20.153225 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:39:20.153236 kernel: acpiphp: Slot [0-22] registered Jan 27 05:39:20.153391 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:39:20.153403 kernel: acpiphp: Slot [0-23] registered Jan 27 05:39:20.153498 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:39:20.153509 kernel: acpiphp: Slot [0-24] registered Jan 27 05:39:20.153605 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:39:20.153617 kernel: acpiphp: Slot [0-25] registered Jan 27 05:39:20.153711 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:39:20.153725 kernel: acpiphp: Slot [0-26] registered Jan 27 05:39:20.153819 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 27 05:39:20.153831 kernel: acpiphp: Slot [0-27] registered Jan 27 05:39:20.153923 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:39:20.153934 kernel: acpiphp: Slot [0-28] registered Jan 27 05:39:20.154027 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:39:20.154041 kernel: acpiphp: Slot [0-29] registered Jan 27 05:39:20.154137 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:39:20.154148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 27 05:39:20.154157 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 27 05:39:20.154165 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 27 05:39:20.155197 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 27 05:39:20.155210 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 27 05:39:20.155222 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 27 05:39:20.155231 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 27 05:39:20.155240 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 27 05:39:20.155249 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 27 05:39:20.155257 kernel: ACPI: PCI: Interrupt 
link GSIB configured for IRQ 17 Jan 27 05:39:20.155266 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 27 05:39:20.155275 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 27 05:39:20.155286 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 27 05:39:20.155294 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 27 05:39:20.155303 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 27 05:39:20.155311 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 27 05:39:20.155320 kernel: iommu: Default domain type: Translated Jan 27 05:39:20.155329 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 27 05:39:20.155337 kernel: efivars: Registered efivars operations Jan 27 05:39:20.155348 kernel: PCI: Using ACPI for IRQ routing Jan 27 05:39:20.155357 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 27 05:39:20.155366 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 27 05:39:20.155374 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 27 05:39:20.155382 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 27 05:39:20.155391 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 27 05:39:20.155399 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 27 05:39:20.155409 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 27 05:39:20.155418 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 27 05:39:20.155427 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 27 05:39:20.155435 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 27 05:39:20.155556 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 27 05:39:20.155672 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 27 05:39:20.155781 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 27 05:39:20.155796 kernel: vgaarb: loaded Jan 27 05:39:20.155805 
kernel: clocksource: Switched to clocksource kvm-clock Jan 27 05:39:20.155814 kernel: VFS: Disk quotas dquot_6.6.0 Jan 27 05:39:20.155823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 27 05:39:20.155831 kernel: pnp: PnP ACPI init Jan 27 05:39:20.155940 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 27 05:39:20.155956 kernel: pnp: PnP ACPI: found 5 devices Jan 27 05:39:20.155965 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 27 05:39:20.155974 kernel: NET: Registered PF_INET protocol family Jan 27 05:39:20.155983 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 27 05:39:20.155991 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 27 05:39:20.156000 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 27 05:39:20.156009 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 27 05:39:20.156020 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 27 05:39:20.156029 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 27 05:39:20.156038 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 05:39:20.156047 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 27 05:39:20.156055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 27 05:39:20.156064 kernel: NET: Registered PF_XDP protocol family Jan 27 05:39:20.156166 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 27 05:39:20.156275 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 27 05:39:20.156374 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 27 05:39:20.156474 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] 
add_size 1000 Jan 27 05:39:20.156581 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 27 05:39:20.156680 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 27 05:39:20.156779 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 27 05:39:20.156877 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 27 05:39:20.156978 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 27 05:39:20.157076 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 27 05:39:20.157903 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 27 05:39:20.158022 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 27 05:39:20.158121 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 27 05:39:20.158230 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 27 05:39:20.158334 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 27 05:39:20.158433 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 27 05:39:20.158528 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 27 05:39:20.158625 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 27 05:39:20.158721 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 27 05:39:20.158818 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 27 05:39:20.158918 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 27 05:39:20.159016 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 27 05:39:20.159112 kernel: pci 
0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 27 05:39:20.159221 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 27 05:39:20.159318 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 27 05:39:20.159415 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 27 05:39:20.159512 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 27 05:39:20.159616 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 27 05:39:20.159713 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 27 05:39:20.159809 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 27 05:39:20.159905 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 27 05:39:20.159999 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 27 05:39:20.160094 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 27 05:39:20.160209 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 27 05:39:20.160307 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 27 05:39:20.160402 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 27 05:39:20.160498 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 27 05:39:20.160604 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 27 05:39:20.160700 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 27 05:39:20.160800 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 27 05:39:20.160895 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 27 05:39:20.160990 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 27 05:39:20.161086 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: 
can't assign; no space Jan 27 05:39:20.164664 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.164798 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.164897 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.164999 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.165095 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.165204 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.165300 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.165395 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.165491 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.165590 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.165685 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.165779 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.165874 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.165969 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.166063 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.166158 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.166263 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.166359 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.166454 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.166548 kernel: pci 0000:00:05.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.166702 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.166891 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.167027 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.167144 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.167259 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.167354 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.167449 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.167544 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.167640 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.167737 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 27 05:39:20.167831 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 27 05:39:20.167925 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 27 05:39:20.168019 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 27 05:39:20.171716 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 27 05:39:20.171820 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 27 05:39:20.171920 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 27 05:39:20.172016 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 27 05:39:20.172111 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 27 05:39:20.172218 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 27 05:39:20.172314 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 27 05:39:20.172408 kernel: 
pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 27 05:39:20.172505 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 27 05:39:20.172624 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.172718 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.172814 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.172908 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173002 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.173097 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173203 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.173298 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173394 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.173488 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173583 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.173678 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173773 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.173870 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.173966 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.174063 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.174158 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.174260 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.174357 
kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.174454 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.174549 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.174643 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.174739 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.174834 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.174928 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.175025 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.175121 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.175226 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.175321 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 27 05:39:20.175416 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 27 05:39:20.175518 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 05:39:20.175615 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 27 05:39:20.175714 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 27 05:39:20.175810 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:39:20.175904 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 05:39:20.175998 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 27 05:39:20.176102 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 27 05:39:20.176209 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:39:20.176322 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned 
Jan 27 05:39:20.176421 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 05:39:20.176515 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 27 05:39:20.176616 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 27 05:39:20.176710 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 05:39:20.176804 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 27 05:39:20.176898 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 27 05:39:20.176992 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 05:39:20.177085 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 27 05:39:20.177197 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 27 05:39:20.177292 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 05:39:20.177385 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 27 05:39:20.177478 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 27 05:39:20.177571 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 05:39:20.177664 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 27 05:39:20.177758 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 27 05:39:20.177856 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 05:39:20.177950 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 27 05:39:20.178044 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 27 05:39:20.178138 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 05:39:20.178239 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 27 05:39:20.178336 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 27 05:39:20.178431 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 27 
05:39:20.178525 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 27 05:39:20.178620 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 27 05:39:20.178713 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 27 05:39:20.178807 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 27 05:39:20.178900 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 27 05:39:20.178994 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 27 05:39:20.179090 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 27 05:39:20.179196 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:39:20.179293 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 27 05:39:20.179386 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 27 05:39:20.179480 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:39:20.179574 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 27 05:39:20.179667 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 27 05:39:20.179762 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:39:20.179857 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 27 05:39:20.179951 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 27 05:39:20.180045 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:39:20.180141 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 27 05:39:20.180253 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 27 05:39:20.180349 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:39:20.180447 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 27 05:39:20.180545 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 
27 05:39:20.180683 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 05:39:20.180796 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 27 05:39:20.180891 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 27 05:39:20.180985 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 27 05:39:20.181079 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:39:20.181183 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 27 05:39:20.181277 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 27 05:39:20.181371 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 27 05:39:20.181464 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:39:20.181558 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 27 05:39:20.181651 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 27 05:39:20.181754 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 27 05:39:20.181848 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:39:20.181946 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 27 05:39:20.182039 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 27 05:39:20.182133 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 27 05:39:20.182252 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:39:20.182351 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 27 05:39:20.182445 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 27 05:39:20.182539 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 27 05:39:20.182634 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:39:20.182729 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 27 05:39:20.182824 kernel: pci 
0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 27 05:39:20.182920 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 27 05:39:20.183014 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:39:20.183110 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 27 05:39:20.183227 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 27 05:39:20.183326 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 27 05:39:20.183420 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:39:20.183519 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 27 05:39:20.183615 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 27 05:39:20.183710 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 27 05:39:20.183805 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:39:20.183900 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 27 05:39:20.183994 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 27 05:39:20.184092 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 27 05:39:20.184210 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:39:20.184307 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 27 05:39:20.184401 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 27 05:39:20.184495 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 27 05:39:20.184599 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 05:39:20.184699 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 27 05:39:20.184794 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 27 05:39:20.184887 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 27 05:39:20.184980 kernel: pci 0000:00:05.2: bridge window [mem 
0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:39:20.185075 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 27 05:39:20.185169 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 27 05:39:20.185273 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 27 05:39:20.185368 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 27 05:39:20.185463 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 27 05:39:20.185556 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 27 05:39:20.185650 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 27 05:39:20.185743 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:39:20.185839 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 27 05:39:20.185926 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 27 05:39:20.186012 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 27 05:39:20.186098 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 27 05:39:20.186251 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 27 05:39:20.186386 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 27 05:39:20.186490 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 27 05:39:20.186586 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 27 05:39:20.186676 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:39:20.186773 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 27 05:39:20.186866 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 27 05:39:20.186958 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 27 05:39:20.187057 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 27 05:39:20.187147 kernel: pci_bus 0000:03: resource 2 [mem 
0x380800000000-0x380fffffffff 64bit pref] Jan 27 05:39:20.187253 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 27 05:39:20.187344 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 27 05:39:20.187442 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 27 05:39:20.187534 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 27 05:39:20.187630 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 27 05:39:20.187721 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 27 05:39:20.187815 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 27 05:39:20.187909 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 27 05:39:20.188003 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 27 05:39:20.188095 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 27 05:39:20.190113 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 27 05:39:20.190245 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 27 05:39:20.190343 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff] Jan 27 05:39:20.190433 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 27 05:39:20.190536 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 27 05:39:20.190626 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 27 05:39:20.190721 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 27 05:39:20.190809 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 27 05:39:20.190906 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 27 05:39:20.191020 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 27 05:39:20.191117 
kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 27 05:39:20.191233 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 27 05:39:20.191331 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 27 05:39:20.191423 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 27 05:39:20.191519 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 27 05:39:20.191608 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 27 05:39:20.191709 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 27 05:39:20.191799 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 27 05:39:20.191892 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 27 05:39:20.191984 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 27 05:39:20.192073 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 27 05:39:20.192167 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 27 05:39:20.193332 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 27 05:39:20.193426 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 27 05:39:20.193531 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 27 05:39:20.193621 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 27 05:39:20.193710 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 27 05:39:20.193803 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 27 05:39:20.193893 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 27 05:39:20.193982 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 27 05:39:20.194079 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 27 05:39:20.194170 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] 
Jan 27 05:39:20.195795 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 27 05:39:20.195893 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 27 05:39:20.195983 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 27 05:39:20.196072 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 27 05:39:20.196198 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 27 05:39:20.196292 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 27 05:39:20.196381 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 27 05:39:20.196474 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 27 05:39:20.196574 kernel: pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff] Jan 27 05:39:20.196666 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 27 05:39:20.196760 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 27 05:39:20.196850 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 27 05:39:20.196939 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 27 05:39:20.197033 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 27 05:39:20.197123 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 27 05:39:20.197899 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 27 05:39:20.198004 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 27 05:39:20.198094 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 27 05:39:20.198192 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 27 05:39:20.198289 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 27 05:39:20.198383 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 27 05:39:20.198471 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit 
pref] Jan 27 05:39:20.198589 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 27 05:39:20.198683 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 27 05:39:20.198771 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 27 05:39:20.198783 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 27 05:39:20.198794 kernel: PCI: CLS 0 bytes, default 64 Jan 27 05:39:20.198803 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 27 05:39:20.198812 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 27 05:39:20.198821 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 27 05:39:20.198830 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 27 05:39:20.198839 kernel: Initialise system trusted keyrings Jan 27 05:39:20.198848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 27 05:39:20.198859 kernel: Key type asymmetric registered Jan 27 05:39:20.198867 kernel: Asymmetric key parser 'x509' registered Jan 27 05:39:20.198876 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 27 05:39:20.198885 kernel: io scheduler mq-deadline registered Jan 27 05:39:20.198894 kernel: io scheduler kyber registered Jan 27 05:39:20.198903 kernel: io scheduler bfq registered Jan 27 05:39:20.199008 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 27 05:39:20.199111 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 27 05:39:20.199240 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 27 05:39:20.199340 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 27 05:39:20.199439 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 27 05:39:20.199536 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 27 05:39:20.199637 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 27 05:39:20.199732 
kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 27 05:39:20.199828 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 27 05:39:20.199923 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 27 05:39:20.200020 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 27 05:39:20.200117 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 27 05:39:20.200221 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 27 05:39:20.200318 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 27 05:39:20.200413 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 27 05:39:20.200510 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 27 05:39:20.200523 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 27 05:39:20.200630 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 27 05:39:20.200728 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 27 05:39:20.200825 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 27 05:39:20.200920 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 27 05:39:20.201019 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 27 05:39:20.201115 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 27 05:39:20.204635 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 27 05:39:20.204749 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 27 05:39:20.204850 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 27 05:39:20.204953 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 27 05:39:20.205050 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 27 05:39:20.205146 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 27 05:39:20.209268 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 27 05:39:20.209375 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 27 05:39:20.209479 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 27 05:39:20.209577 kernel: 
pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 27 05:39:20.209589 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 27 05:39:20.209685 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 27 05:39:20.209781 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 27 05:39:20.209879 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 27 05:39:20.209974 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 27 05:39:20.210074 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 27 05:39:20.210170 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 27 05:39:20.210275 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 27 05:39:20.210371 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 27 05:39:20.210467 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 27 05:39:20.210562 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 27 05:39:20.210658 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 27 05:39:20.210758 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 27 05:39:20.210855 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 27 05:39:20.210954 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 27 05:39:20.211050 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 27 05:39:20.211145 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 27 05:39:20.211156 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 27 05:39:20.211681 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 27 05:39:20.211783 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 27 05:39:20.211881 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 27 05:39:20.211982 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 27 05:39:20.212080 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 27 05:39:20.212184 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 27 05:39:20.212283 kernel: pcieport 0000:00:05.3: 
PME: Signaling with IRQ 51 Jan 27 05:39:20.212383 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 27 05:39:20.212479 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 27 05:39:20.212605 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 27 05:39:20.212617 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 27 05:39:20.212626 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 27 05:39:20.212635 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 27 05:39:20.212645 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 27 05:39:20.212656 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 27 05:39:20.212664 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 27 05:39:20.212770 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 27 05:39:20.212782 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 27 05:39:20.212871 kernel: rtc_cmos 00:03: registered as rtc0 Jan 27 05:39:20.212962 kernel: rtc_cmos 00:03: setting system clock to 2026-01-27T05:39:18 UTC (1769492358) Jan 27 05:39:20.213055 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 27 05:39:20.213066 kernel: intel_pstate: CPU model not supported Jan 27 05:39:20.213075 kernel: efifb: probing for efifb Jan 27 05:39:20.213084 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 27 05:39:20.213092 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 27 05:39:20.213101 kernel: efifb: scrolling: redraw Jan 27 05:39:20.213110 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 27 05:39:20.213120 kernel: Console: switching to colour frame buffer device 160x50 Jan 27 05:39:20.213129 kernel: fb0: EFI VGA frame buffer device Jan 27 05:39:20.213137 kernel: pstore: Using crash dump compression: deflate Jan 27 05:39:20.213147 kernel: pstore: Registered efi_pstore as persistent store backend Jan 27 
05:39:20.213155 kernel: NET: Registered PF_INET6 protocol family Jan 27 05:39:20.213164 kernel: Segment Routing with IPv6 Jan 27 05:39:20.213459 kernel: In-situ OAM (IOAM) with IPv6 Jan 27 05:39:20.213476 kernel: NET: Registered PF_PACKET protocol family Jan 27 05:39:20.213485 kernel: Key type dns_resolver registered Jan 27 05:39:20.213494 kernel: IPI shorthand broadcast: enabled Jan 27 05:39:20.213503 kernel: sched_clock: Marking stable (2424001458, 165488273)->(2891722099, -302232368) Jan 27 05:39:20.213511 kernel: registered taskstats version 1 Jan 27 05:39:20.213520 kernel: Loading compiled-in X.509 certificates Jan 27 05:39:20.213529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 9e3db75de0fafb28d6cceb2e9f9c71b82c500cb9' Jan 27 05:39:20.213538 kernel: Demotion targets for Node 0: null Jan 27 05:39:20.213549 kernel: Key type .fscrypt registered Jan 27 05:39:20.213558 kernel: Key type fscrypt-provisioning registered Jan 27 05:39:20.213567 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 27 05:39:20.213575 kernel: ima: Allocated hash algorithm: sha1 Jan 27 05:39:20.213584 kernel: ima: No architecture policies found Jan 27 05:39:20.213593 kernel: clk: Disabling unused clocks Jan 27 05:39:20.213602 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 27 05:39:20.213612 kernel: Write protecting the kernel read-only data: 47104k Jan 27 05:39:20.213621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 27 05:39:20.213630 kernel: Run /init as init process Jan 27 05:39:20.213638 kernel: with arguments: Jan 27 05:39:20.213648 kernel: /init Jan 27 05:39:20.213656 kernel: with environment: Jan 27 05:39:20.213664 kernel: HOME=/ Jan 27 05:39:20.213674 kernel: TERM=linux Jan 27 05:39:20.213683 kernel: SCSI subsystem initialized Jan 27 05:39:20.213691 kernel: libata version 3.00 loaded. 
Jan 27 05:39:20.213809 kernel: ahci 0000:00:1f.2: version 3.0 Jan 27 05:39:20.213821 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 27 05:39:20.215214 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 27 05:39:20.215349 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 27 05:39:20.215454 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 27 05:39:20.215570 kernel: scsi host0: ahci Jan 27 05:39:20.215674 kernel: scsi host1: ahci Jan 27 05:39:20.215796 kernel: scsi host2: ahci Jan 27 05:39:20.215896 kernel: scsi host3: ahci Jan 27 05:39:20.215999 kernel: scsi host4: ahci Jan 27 05:39:20.216101 kernel: scsi host5: ahci Jan 27 05:39:20.216113 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 55 lpm-pol 1 Jan 27 05:39:20.216123 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 55 lpm-pol 1 Jan 27 05:39:20.216132 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 55 lpm-pol 1 Jan 27 05:39:20.216141 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 55 lpm-pol 1 Jan 27 05:39:20.216152 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 55 lpm-pol 1 Jan 27 05:39:20.216161 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 55 lpm-pol 1 Jan 27 05:39:20.216170 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217231 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217243 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217255 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217266 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217277 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 27 05:39:20.217290 kernel: ACPI: bus type USB registered Jan 27 05:39:20.217299 kernel: usbcore: registered new interface driver usbfs Jan 27 05:39:20.217308 kernel: usbcore: registered 
new interface driver hub Jan 27 05:39:20.217317 kernel: usbcore: registered new device driver usb Jan 27 05:39:20.217449 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 27 05:39:20.217556 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 27 05:39:20.217659 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 27 05:39:20.217764 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 27 05:39:20.217893 kernel: hub 1-0:1.0: USB hub found Jan 27 05:39:20.218321 kernel: hub 1-0:1.0: 2 ports detected Jan 27 05:39:20.218434 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 27 05:39:20.218533 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 27 05:39:20.218548 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 27 05:39:20.218558 kernel: GPT:25804799 != 104857599 Jan 27 05:39:20.218567 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 27 05:39:20.218576 kernel: GPT:25804799 != 104857599 Jan 27 05:39:20.218585 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 27 05:39:20.218594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 27 05:39:20.218603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 27 05:39:20.218614 kernel: device-mapper: uevent: version 1.0.3 Jan 27 05:39:20.218623 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 27 05:39:20.218633 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 27 05:39:20.218641 kernel: raid6: avx512x4 gen() 41920 MB/s Jan 27 05:39:20.218652 kernel: raid6: avx512x2 gen() 44526 MB/s Jan 27 05:39:20.218661 kernel: raid6: avx512x1 gen() 44342 MB/s Jan 27 05:39:20.218672 kernel: raid6: avx2x4 gen() 34284 MB/s Jan 27 05:39:20.218681 kernel: raid6: avx2x2 gen() 33820 MB/s Jan 27 05:39:20.218690 kernel: raid6: avx2x1 gen() 30074 MB/s Jan 27 05:39:20.218699 kernel: raid6: using algorithm avx512x2 gen() 44526 MB/s Jan 27 05:39:20.218708 kernel: raid6: .... xor() 26687 MB/s, rmw enabled Jan 27 05:39:20.218719 kernel: raid6: using avx512x2 recovery algorithm Jan 27 05:39:20.218728 kernel: xor: automatically using best checksumming function avx Jan 27 05:39:20.218854 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 27 05:39:20.218868 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 27 05:39:20.218877 kernel: BTRFS: device fsid 8e29e710-4356-4007-b707-6ae7cc95ead5 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (203) Jan 27 05:39:20.218887 kernel: BTRFS info (device dm-0): first mount of filesystem 8e29e710-4356-4007-b707-6ae7cc95ead5 Jan 27 05:39:20.218896 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:39:20.218905 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 27 05:39:20.218916 kernel: BTRFS info (device dm-0): enabling free space tree Jan 27 05:39:20.218925 kernel: loop: module loaded Jan 27 05:39:20.218934 kernel: loop0: detected capacity change from 0 to 100552 Jan 27 05:39:20.218943 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 27 05:39:20.218954 systemd[1]: Successfully made /usr/ read-only. 
Jan 27 05:39:20.218966 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 05:39:20.218978 systemd[1]: Detected virtualization kvm. Jan 27 05:39:20.218987 systemd[1]: Detected architecture x86-64. Jan 27 05:39:20.218996 systemd[1]: Running in initrd. Jan 27 05:39:20.219005 systemd[1]: No hostname configured, using default hostname. Jan 27 05:39:20.219015 systemd[1]: Hostname set to . Jan 27 05:39:20.219024 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 05:39:20.219033 systemd[1]: Queued start job for default target initrd.target. Jan 27 05:39:20.219044 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 05:39:20.219053 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 05:39:20.219063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 05:39:20.219073 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 27 05:39:20.219082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 05:39:20.219094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 27 05:39:20.219104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 27 05:39:20.219113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 05:39:20.219122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 27 05:39:20.219131 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 27 05:39:20.219141 systemd[1]: Reached target paths.target - Path Units. Jan 27 05:39:20.219150 systemd[1]: Reached target slices.target - Slice Units. Jan 27 05:39:20.219161 systemd[1]: Reached target swap.target - Swaps. Jan 27 05:39:20.219170 systemd[1]: Reached target timers.target - Timer Units. Jan 27 05:39:20.219713 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 05:39:20.219723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 05:39:20.219733 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 05:39:20.219743 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 27 05:39:20.219752 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 27 05:39:20.219764 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 05:39:20.219773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 05:39:20.219783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 05:39:20.219792 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 05:39:20.219802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 27 05:39:20.219811 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 27 05:39:20.219823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 27 05:39:20.219832 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 27 05:39:20.219842 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Jan 27 05:39:20.219852 systemd[1]: Starting systemd-fsck-usr.service... Jan 27 05:39:20.219861 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 05:39:20.219870 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 05:39:20.219882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:20.219892 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 27 05:39:20.219903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 05:39:20.219912 systemd[1]: Finished systemd-fsck-usr.service. Jan 27 05:39:20.219922 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 27 05:39:20.219959 systemd-journald[340]: Collecting audit messages is enabled. Jan 27 05:39:20.219987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:20.219997 kernel: audit: type=1130 audit(1769492360.160:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.220009 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 05:39:20.220019 kernel: audit: type=1130 audit(1769492360.167:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.220028 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 27 05:39:20.220038 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 27 05:39:20.220047 kernel: Bridge firewalling registered Jan 27 05:39:20.220056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 05:39:20.220068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 05:39:20.220077 kernel: audit: type=1130 audit(1769492360.197:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.220087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 05:39:20.220096 kernel: audit: type=1130 audit(1769492360.203:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.220105 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 05:39:20.220114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 05:39:20.220127 systemd-journald[340]: Journal started Jan 27 05:39:20.220148 systemd-journald[340]: Runtime Journal (/run/log/journal/fb532b848b6140a88cce0cf686cb4352) is 8M, max 77.9M, 69.9M free. Jan 27 05:39:20.220809 kernel: audit: type=1130 audit(1769492360.220:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 27 05:39:20.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.183246 systemd-modules-load[342]: Inserted module 'br_netfilter' Jan 27 05:39:20.229190 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 27 05:39:20.232216 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 05:39:20.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.237882 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 05:39:20.242507 kernel: audit: type=1130 audit(1769492360.234:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.242531 kernel: audit: type=1130 audit(1769492360.238:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:20.242000 audit: BPF prog-id=6 op=LOAD Jan 27 05:39:20.244210 kernel: audit: type=1334 audit(1769492360.242:9): prog-id=6 op=LOAD Jan 27 05:39:20.245765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 05:39:20.249365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 05:39:20.256792 dracut-cmdline[366]: dracut-109 Jan 27 05:39:20.262554 dracut-cmdline[366]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=94a0aed2c135ea3629cf7bc829842658bafc4ce682f9974c582239b9a4f2cb9e Jan 27 05:39:20.270933 systemd-tmpfiles[377]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 27 05:39:20.276654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 05:39:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.283216 kernel: audit: type=1130 audit(1769492360.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.315159 systemd-resolved[376]: Positive Trust Anchors: Jan 27 05:39:20.315186 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 05:39:20.315190 systemd-resolved[376]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 05:39:20.315221 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 05:39:20.341407 systemd-resolved[376]: Defaulting to hostname 'linux'. Jan 27 05:39:20.342778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 05:39:20.344015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 05:39:20.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.392206 kernel: Loading iSCSI transport class v2.0-870. Jan 27 05:39:20.409222 kernel: iscsi: registered transport (tcp) Jan 27 05:39:20.434631 kernel: iscsi: registered transport (qla4xxx) Jan 27 05:39:20.434726 kernel: QLogic iSCSI HBA Driver Jan 27 05:39:20.462404 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 05:39:20.486456 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 05:39:20.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.489947 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 27 05:39:20.535718 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 27 05:39:20.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.538209 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 27 05:39:20.539419 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 27 05:39:20.572709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 27 05:39:20.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.574000 audit: BPF prog-id=7 op=LOAD Jan 27 05:39:20.575000 audit: BPF prog-id=8 op=LOAD Jan 27 05:39:20.575718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:39:20.603050 systemd-udevd[614]: Using default interface naming scheme 'v257'. Jan 27 05:39:20.613055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 05:39:20.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.616335 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 27 05:39:20.642168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 05:39:20.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:20.644000 audit: BPF prog-id=9 op=LOAD Jan 27 05:39:20.645170 dracut-pre-trigger[691]: rd.md=0: removing MD RAID activation Jan 27 05:39:20.645943 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 05:39:20.681158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 05:39:20.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.684491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 05:39:20.694190 systemd-networkd[722]: lo: Link UP Jan 27 05:39:20.694197 systemd-networkd[722]: lo: Gained carrier Jan 27 05:39:20.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.696929 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 05:39:20.697529 systemd[1]: Reached target network.target - Network. Jan 27 05:39:20.777959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 05:39:20.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.779924 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 27 05:39:20.921220 kernel: cryptd: max_cpu_qlen set to 1000 Jan 27 05:39:20.922588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 27 05:39:20.952921 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 27 05:39:20.960386 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 27 05:39:20.963193 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 27 05:39:20.969900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 27 05:39:20.981940 kernel: AES CTR mode by8 optimization enabled Jan 27 05:39:20.982002 kernel: usbcore: registered new interface driver usbhid Jan 27 05:39:20.982015 kernel: usbhid: USB HID core driver Jan 27 05:39:20.983215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 05:39:20.992702 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 27 05:39:20.992752 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 27 05:39:20.994463 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 27 05:39:20.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:20.994951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:39:20.995107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:20.997318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:20.998964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:21.007784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:39:21.008210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 27 05:39:21.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:21.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:21.019497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:21.035556 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:39:21.035561 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 05:39:21.038214 systemd-networkd[722]: eth0: Link UP Jan 27 05:39:21.038375 systemd-networkd[722]: eth0: Gained carrier Jan 27 05:39:21.038386 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:39:21.046280 disk-uuid[880]: Primary Header is updated. Jan 27 05:39:21.046280 disk-uuid[880]: Secondary Entries is updated. Jan 27 05:39:21.046280 disk-uuid[880]: Secondary Header is updated. Jan 27 05:39:21.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:21.052213 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 27 05:39:21.054705 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 05:39:21.057046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 05:39:21.058827 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 27 05:39:21.066270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 27 05:39:21.078988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:21.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:21.107290 systemd-networkd[722]: eth0: DHCPv4 address 10.0.2.139/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 27 05:39:21.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:21.124964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 27 05:39:22.074445 systemd-networkd[722]: eth0: Gained IPv6LL Jan 27 05:39:22.121624 disk-uuid[885]: Warning: The kernel is still using the old partition table. Jan 27 05:39:22.121624 disk-uuid[885]: The new table will be used at the next reboot or after you Jan 27 05:39:22.121624 disk-uuid[885]: run partprobe(8) or kpartx(8) Jan 27 05:39:22.121624 disk-uuid[885]: The operation has completed successfully. Jan 27 05:39:22.132225 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 27 05:39:22.132394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 27 05:39:22.143760 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 27 05:39:22.143792 kernel: audit: type=1130 audit(1769492362.133:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.143818 kernel: audit: type=1131 audit(1769492362.133:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:22.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.136971 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 27 05:39:22.196207 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (908) Jan 27 05:39:22.200804 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:39:22.200880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:39:22.210273 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:39:22.210340 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:39:22.219588 kernel: BTRFS info (device vda6): last unmount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:39:22.220155 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 27 05:39:22.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.224324 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 27 05:39:22.225799 kernel: audit: type=1130 audit(1769492362.221:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:22.498062 ignition[927]: Ignition 2.24.0 Jan 27 05:39:22.499300 ignition[927]: Stage: fetch-offline Jan 27 05:39:22.499370 ignition[927]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:22.501263 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 05:39:22.506267 kernel: audit: type=1130 audit(1769492362.501:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:22.499385 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:22.503868 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 27 05:39:22.499485 ignition[927]: parsed url from cmdline: "" Jan 27 05:39:22.499490 ignition[927]: no config URL provided Jan 27 05:39:22.499496 ignition[927]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 05:39:22.499506 ignition[927]: no config at "/usr/lib/ignition/user.ign" Jan 27 05:39:22.499513 ignition[927]: failed to fetch config: resource requires networking Jan 27 05:39:22.499713 ignition[927]: Ignition finished successfully Jan 27 05:39:22.530819 ignition[934]: Ignition 2.24.0 Jan 27 05:39:22.530831 ignition[934]: Stage: fetch Jan 27 05:39:22.530964 ignition[934]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:22.530974 ignition[934]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:22.531047 ignition[934]: parsed url from cmdline: "" Jan 27 05:39:22.531051 ignition[934]: no config URL provided Jan 27 05:39:22.531055 ignition[934]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 05:39:22.531061 ignition[934]: no config at 
"/usr/lib/ignition/user.ign" Jan 27 05:39:22.531143 ignition[934]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 27 05:39:22.531734 ignition[934]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 27 05:39:22.533208 ignition[934]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 27 05:39:23.355577 ignition[934]: GET result: OK Jan 27 05:39:23.355818 ignition[934]: parsing config with SHA512: 3684a70e85cbdcdbcbbe7879a2d5fa961b565cde523783a018b78c4df020cd05444404d13b73d02dc9c6d74086d06a3579409491c71181bb9b5275a3775c9a4c Jan 27 05:39:23.370269 unknown[934]: fetched base config from "system" Jan 27 05:39:23.370286 unknown[934]: fetched base config from "system" Jan 27 05:39:23.370831 ignition[934]: fetch: fetch complete Jan 27 05:39:23.370295 unknown[934]: fetched user config from "openstack" Jan 27 05:39:23.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.370839 ignition[934]: fetch: fetch passed Jan 27 05:39:23.392137 kernel: audit: type=1130 audit(1769492363.379:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.378901 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 27 05:39:23.370937 ignition[934]: Ignition finished successfully Jan 27 05:39:23.384865 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 27 05:39:23.429863 ignition[940]: Ignition 2.24.0 Jan 27 05:39:23.431115 ignition[940]: Stage: kargs Jan 27 05:39:23.432115 ignition[940]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:23.432131 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:23.433572 ignition[940]: kargs: kargs passed Jan 27 05:39:23.433704 ignition[940]: Ignition finished successfully Jan 27 05:39:23.434978 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 27 05:39:23.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.441218 kernel: audit: type=1130 audit(1769492363.435:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.441220 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 27 05:39:23.474963 ignition[946]: Ignition 2.24.0 Jan 27 05:39:23.474981 ignition[946]: Stage: disks Jan 27 05:39:23.475255 ignition[946]: no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:23.475269 ignition[946]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:23.476433 ignition[946]: disks: disks passed Jan 27 05:39:23.476491 ignition[946]: Ignition finished successfully Jan 27 05:39:23.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.478601 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 27 05:39:23.485309 kernel: audit: type=1130 audit(1769492363.479:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:23.480035 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 27 05:39:23.484834 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 27 05:39:23.486326 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 05:39:23.487031 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 05:39:23.488246 systemd[1]: Reached target basic.target - Basic System. Jan 27 05:39:23.504327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 27 05:39:23.576375 systemd-fsck[955]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 27 05:39:23.580037 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 27 05:39:23.585355 kernel: audit: type=1130 audit(1769492363.580:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:23.581844 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 27 05:39:23.749212 kernel: EXT4-fs (vda9): mounted filesystem a9099a9f-29a1-43d8-a05a-53a191872646 r/w with ordered data mode. Quota mode: none. Jan 27 05:39:23.749236 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 27 05:39:23.750319 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 27 05:39:23.753787 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 05:39:23.755455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 27 05:39:23.757453 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 27 05:39:23.766298 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 27 05:39:23.767412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 27 05:39:23.769399 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 05:39:23.772495 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 27 05:39:23.775266 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 27 05:39:23.788863 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963) Jan 27 05:39:23.794184 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:39:23.794238 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:39:23.801758 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:39:23.801795 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:39:23.803893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 05:39:23.874195 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:24.015353 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 27 05:39:24.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:24.021762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 27 05:39:24.022313 kernel: audit: type=1130 audit(1769492364.016:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:24.032314 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 27 05:39:24.042089 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 27 05:39:24.044338 kernel: BTRFS info (device vda6): last unmount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:39:24.074528 ignition[1063]: INFO : Ignition 2.24.0 Jan 27 05:39:24.076194 ignition[1063]: INFO : Stage: mount Jan 27 05:39:24.076194 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:24.076194 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:24.077987 ignition[1063]: INFO : mount: mount passed Jan 27 05:39:24.077987 ignition[1063]: INFO : Ignition finished successfully Jan 27 05:39:24.080787 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 27 05:39:24.085504 kernel: audit: type=1130 audit(1769492364.081:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:24.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:24.086862 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 27 05:39:24.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:24.923216 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:26.936252 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:30.952202 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:30.960806 coreos-metadata[965]: Jan 27 05:39:30.960 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:39:30.986135 coreos-metadata[965]: Jan 27 05:39:30.986 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 05:39:31.581641 coreos-metadata[965]: Jan 27 05:39:31.581 INFO Fetch successful Jan 27 05:39:31.584264 coreos-metadata[965]: Jan 27 05:39:31.583 INFO wrote hostname ci-4592-0-0-n-5ca0d578df to /sysroot/etc/hostname Jan 27 05:39:31.587997 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 27 05:39:31.615096 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:39:31.615160 kernel: audit: type=1130 audit(1769492371.588:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:31.615222 kernel: audit: type=1131 audit(1769492371.588:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:31.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:31.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:31.588105 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 27 05:39:31.590271 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 27 05:39:31.632786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 05:39:31.675210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1080) Jan 27 05:39:31.678375 kernel: BTRFS info (device vda6): first mount of filesystem 3d9bae75-48f1-4a66-8ef3-32c49c69a6d1 Jan 27 05:39:31.678412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 05:39:31.687643 kernel: BTRFS info (device vda6): turning on async discard Jan 27 05:39:31.687689 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 05:39:31.690298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 05:39:31.726991 ignition[1098]: INFO : Ignition 2.24.0 Jan 27 05:39:31.726991 ignition[1098]: INFO : Stage: files Jan 27 05:39:31.728454 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:31.728454 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:31.728454 ignition[1098]: DEBUG : files: compiled without relabeling support, skipping Jan 27 05:39:31.729842 ignition[1098]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 27 05:39:31.729842 ignition[1098]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 27 05:39:31.735871 ignition[1098]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 27 05:39:31.737121 ignition[1098]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 27 05:39:31.737121 ignition[1098]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 27 05:39:31.737008 unknown[1098]: wrote ssh authorized keys file for user: core Jan 27 05:39:31.742192 
ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 27 05:39:31.742192 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 27 05:39:31.806725 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 27 05:39:31.935926 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 27 05:39:31.935926 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 05:39:31.937807 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 05:39:31.940804 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 05:39:31.940804 
ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 05:39:31.940804 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 05:39:31.942415 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 05:39:31.942415 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 05:39:31.942415 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 27 05:39:32.199821 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 27 05:39:32.808717 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 27 05:39:32.810008 ignition[1098]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 27 05:39:32.811748 ignition[1098]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 05:39:32.814830 ignition[1098]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 05:39:32.814830 ignition[1098]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 27 05:39:32.814830 ignition[1098]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 27 05:39:32.823499 kernel: audit: type=1130 
audit(1769492372.817:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.823577 ignition[1098]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 27 05:39:32.823577 ignition[1098]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 27 05:39:32.823577 ignition[1098]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 27 05:39:32.823577 ignition[1098]: INFO : files: files passed Jan 27 05:39:32.823577 ignition[1098]: INFO : Ignition finished successfully Jan 27 05:39:32.816618 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 27 05:39:32.818917 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 27 05:39:32.829888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 27 05:39:32.832402 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 27 05:39:32.832522 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 27 05:39:32.842418 kernel: audit: type=1130 audit(1769492372.834:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.842446 kernel: audit: type=1131 audit(1769492372.834:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:32.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.853211 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:39:32.854782 initrd-setup-root-after-ignition[1130]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:39:32.855915 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 05:39:32.858056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 05:39:32.863132 kernel: audit: type=1130 audit(1769492372.858:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.858782 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 27 05:39:32.864487 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 27 05:39:32.906393 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 27 05:39:32.906526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 27 05:39:32.915715 kernel: audit: type=1130 audit(1769492372.907:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.915745 kernel: audit: type=1131 audit(1769492372.907:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.908095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 27 05:39:32.916304 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 27 05:39:32.917821 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 27 05:39:32.918818 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 27 05:39:32.956114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 05:39:32.961084 kernel: audit: type=1130 audit(1769492372.956:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:32.959317 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 27 05:39:32.979009 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 05:39:32.979912 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 27 05:39:32.981132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 05:39:32.982201 systemd[1]: Stopped target timers.target - Timer Units. Jan 27 05:39:32.983233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 27 05:39:32.988322 kernel: audit: type=1131 audit(1769492372.983:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:32.983347 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 05:39:32.984010 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 27 05:39:32.989385 systemd[1]: Stopped target basic.target - Basic System. Jan 27 05:39:32.990020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 27 05:39:32.991161 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 05:39:32.992261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 27 05:39:32.993421 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 27 05:39:32.994575 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 27 05:39:32.995654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 27 05:39:32.996786 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 27 05:39:32.997890 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 27 05:39:32.998998 systemd[1]: Stopped target swap.target - Swaps. Jan 27 05:39:33.000089 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 27 05:39:33.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.000263 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 27 05:39:33.001752 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 27 05:39:33.002859 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 05:39:33.003820 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 27 05:39:33.003929 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 05:39:33.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.004892 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 27 05:39:33.005006 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 27 05:39:33.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.006507 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 27 05:39:33.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:33.006619 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 05:39:33.007598 systemd[1]: ignition-files.service: Deactivated successfully. Jan 27 05:39:33.007697 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 27 05:39:33.010272 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 27 05:39:33.010876 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 27 05:39:33.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.011002 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 05:39:33.014118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 27 05:39:33.017511 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 27 05:39:33.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.017619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 05:39:33.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.018663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 27 05:39:33.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.018770 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 27 05:39:33.019682 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 27 05:39:33.019784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 05:39:33.032005 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 27 05:39:33.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.032086 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 27 05:39:33.037277 ignition[1154]: INFO : Ignition 2.24.0 Jan 27 05:39:33.037277 ignition[1154]: INFO : Stage: umount Jan 27 05:39:33.038378 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 05:39:33.038378 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 05:39:33.038378 ignition[1154]: INFO : umount: umount passed Jan 27 05:39:33.038378 ignition[1154]: INFO : Ignition finished successfully Jan 27 05:39:33.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.039787 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 27 05:39:33.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.040277 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 27 05:39:33.041244 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 27 05:39:33.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.041325 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 27 05:39:33.041740 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 27 05:39:33.041776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 27 05:39:33.042140 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 27 05:39:33.042225 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 27 05:39:33.042634 systemd[1]: Stopped target network.target - Network. Jan 27 05:39:33.043550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 27 05:39:33.043590 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 05:39:33.044543 systemd[1]: Stopped target paths.target - Path Units. Jan 27 05:39:33.045522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 27 05:39:33.049235 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 05:39:33.049768 systemd[1]: Stopped target slices.target - Slice Units. Jan 27 05:39:33.050628 systemd[1]: Stopped target sockets.target - Socket Units. Jan 27 05:39:33.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:33.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.051085 systemd[1]: iscsid.socket: Deactivated successfully. Jan 27 05:39:33.051128 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 05:39:33.051983 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 27 05:39:33.052010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 05:39:33.053143 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 27 05:39:33.053169 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 27 05:39:33.054051 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 27 05:39:33.054099 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 27 05:39:33.055279 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 27 05:39:33.055318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 27 05:39:33.055897 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 27 05:39:33.056336 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 27 05:39:33.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.058790 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 27 05:39:33.061136 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 27 05:39:33.061315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 27 05:39:33.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:33.062275 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 27 05:39:33.062360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 27 05:39:33.064928 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 27 05:39:33.065039 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 27 05:39:33.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.067317 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 27 05:39:33.067832 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 27 05:39:33.069000 audit: BPF prog-id=9 op=UNLOAD Jan 27 05:39:33.067868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 27 05:39:33.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.071407 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 27 05:39:33.071852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 27 05:39:33.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.071904 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 05:39:33.072535 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:39:33.073198 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 27 05:39:33.073299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 27 05:39:33.077000 audit: BPF prog-id=6 op=UNLOAD Jan 27 05:39:33.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.075373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 27 05:39:33.075433 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 27 05:39:33.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.077688 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 27 05:39:33.077736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 27 05:39:33.087458 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 27 05:39:33.087830 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 05:39:33.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.091037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 27 05:39:33.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.091085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 27 05:39:33.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.091511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 27 05:39:33.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.091535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 05:39:33.091890 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 27 05:39:33.091928 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 27 05:39:33.092809 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 27 05:39:33.092847 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 27 05:39:33.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:33.093839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 27 05:39:33.093877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 05:39:33.096284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 27 05:39:33.098393 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 27 05:39:33.098509 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 05:39:33.099097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 27 05:39:33.099136 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 27 05:39:33.099655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 27 05:39:33.099691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 27 05:39:33.108901 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 27 05:39:33.109012 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 27 05:39:33.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:33.114435 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 27 05:39:33.114543 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 27 05:39:33.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:33.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:33.115679 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 27 05:39:33.117257 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 27 05:39:33.132790 systemd[1]: Switching root.
Jan 27 05:39:33.158351 systemd-journald[340]: Journal stopped
Jan 27 05:39:34.351413 systemd-journald[340]: Received SIGTERM from PID 1 (systemd).
Jan 27 05:39:34.351495 kernel: SELinux: policy capability network_peer_controls=1
Jan 27 05:39:34.351517 kernel: SELinux: policy capability open_perms=1
Jan 27 05:39:34.351529 kernel: SELinux: policy capability extended_socket_class=1
Jan 27 05:39:34.351544 kernel: SELinux: policy capability always_check_network=0
Jan 27 05:39:34.351558 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 27 05:39:34.351569 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 27 05:39:34.351580 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 27 05:39:34.351591 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 27 05:39:34.351603 kernel: SELinux: policy capability userspace_initial_context=0
Jan 27 05:39:34.351614 systemd[1]: Successfully loaded SELinux policy in 76.408ms.
Jan 27 05:39:34.351638 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.891ms.
Jan 27 05:39:34.351651 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 27 05:39:34.351663 systemd[1]: Detected virtualization kvm.
Jan 27 05:39:34.351674 systemd[1]: Detected architecture x86-64.
Jan 27 05:39:34.351689 systemd[1]: Detected first boot.
Jan 27 05:39:34.351701 systemd[1]: Hostname set to .
Jan 27 05:39:34.351716 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 27 05:39:34.351730 zram_generator::config[1197]: No configuration found.
Jan 27 05:39:34.351750 kernel: Guest personality initialized and is inactive
Jan 27 05:39:34.351767 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 27 05:39:34.351778 kernel: Initialized host personality
Jan 27 05:39:34.351788 kernel: NET: Registered PF_VSOCK protocol family
Jan 27 05:39:34.351800 systemd[1]: Populated /etc with preset unit settings.
Jan 27 05:39:34.351811 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 27 05:39:34.351828 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 27 05:39:34.351840 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 27 05:39:34.351857 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 27 05:39:34.351868 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 27 05:39:34.351880 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 27 05:39:34.351892 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 27 05:39:34.351905 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 27 05:39:34.351917 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 27 05:39:34.351929 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 27 05:39:34.351941 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 27 05:39:34.351953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 27 05:39:34.351966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 27 05:39:34.351978 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 27 05:39:34.351990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 27 05:39:34.352003 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 27 05:39:34.352016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 27 05:39:34.352028 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 27 05:39:34.352042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 27 05:39:34.352055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 27 05:39:34.352066 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 27 05:39:34.352077 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 27 05:39:34.352089 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 27 05:39:34.352100 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 27 05:39:34.352111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 27 05:39:34.352122 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 27 05:39:34.352134 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 27 05:39:34.352151 systemd[1]: Reached target slices.target - Slice Units.
Jan 27 05:39:34.352162 systemd[1]: Reached target swap.target - Swaps.
Jan 27 05:39:34.352185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 27 05:39:34.352197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 27 05:39:34.352208 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 27 05:39:34.352220 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 27 05:39:34.352232 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 27 05:39:34.352245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 27 05:39:34.352257 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 27 05:39:34.352268 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 27 05:39:34.352279 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 27 05:39:34.352290 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 27 05:39:34.352302 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 27 05:39:34.352313 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 27 05:39:34.352325 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 27 05:39:34.352337 systemd[1]: Mounting media.mount - External Media Directory...
Jan 27 05:39:34.352350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 27 05:39:34.352362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 27 05:39:34.352373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 27 05:39:34.352384 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 27 05:39:34.352397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 27 05:39:34.352424 systemd[1]: Reached target machines.target - Containers.
Jan 27 05:39:34.352436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 27 05:39:34.352448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 27 05:39:34.352459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 27 05:39:34.352470 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 27 05:39:34.352482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 27 05:39:34.352496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 27 05:39:34.352507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 27 05:39:34.352519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 27 05:39:34.352530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 27 05:39:34.352543 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 27 05:39:34.352555 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 27 05:39:34.352567 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 27 05:39:34.352578 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 27 05:39:34.352589 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 27 05:39:34.352601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 27 05:39:34.352615 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 27 05:39:34.352627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 27 05:39:34.352638 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 27 05:39:34.352651 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 27 05:39:34.352662 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 27 05:39:34.352673 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 27 05:39:34.352684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 27 05:39:34.352699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 27 05:39:34.352713 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 27 05:39:34.352726 systemd[1]: Mounted media.mount - External Media Directory.
Jan 27 05:39:34.352738 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 27 05:39:34.352749 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 27 05:39:34.352763 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 27 05:39:34.352774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 27 05:39:34.352786 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 27 05:39:34.352798 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 27 05:39:34.352810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 27 05:39:34.352821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 27 05:39:34.352832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 27 05:39:34.352845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 27 05:39:34.352856 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 27 05:39:34.352867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 27 05:39:34.352878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 27 05:39:34.352912 systemd-journald[1273]: Collecting audit messages is enabled.
Jan 27 05:39:34.352937 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 27 05:39:34.352952 systemd-journald[1273]: Journal started
Jan 27 05:39:34.352975 systemd-journald[1273]: Runtime Journal (/run/log/journal/fb532b848b6140a88cce0cf686cb4352) is 8M, max 77.9M, 69.9M free.
Jan 27 05:39:34.134000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 27 05:39:34.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.264000 audit: BPF prog-id=14 op=UNLOAD
Jan 27 05:39:34.264000 audit: BPF prog-id=13 op=UNLOAD
Jan 27 05:39:34.356259 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 27 05:39:34.265000 audit: BPF prog-id=15 op=LOAD
Jan 27 05:39:34.265000 audit: BPF prog-id=16 op=LOAD
Jan 27 05:39:34.265000 audit: BPF prog-id=17 op=LOAD
Jan 27 05:39:34.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.347000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 27 05:39:34.347000 audit[1273]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffec2aba870 a2=4000 a3=0 items=0 ppid=1 pid=1273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 27 05:39:34.347000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 27 05:39:34.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.047231 systemd[1]: Queued start job for default target multi-user.target.
Jan 27 05:39:34.072561 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 27 05:39:34.073077 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 27 05:39:34.355827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 27 05:39:34.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.369893 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 27 05:39:34.375147 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 27 05:39:34.377347 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 27 05:39:34.380245 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 27 05:39:34.380278 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 27 05:39:34.381578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 27 05:39:34.382148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 27 05:39:34.382261 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 27 05:39:34.390199 kernel: fuse: init (API version 7.41)
Jan 27 05:39:34.390862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 27 05:39:34.394363 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 27 05:39:34.395269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 27 05:39:34.401091 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 27 05:39:34.402273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 27 05:39:34.405161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 27 05:39:34.417218 kernel: ACPI: bus type drm_connector registered
Jan 27 05:39:34.419654 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 27 05:39:34.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.422375 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 27 05:39:34.423117 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 27 05:39:34.424462 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 27 05:39:34.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.425337 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 27 05:39:34.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.427704 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 27 05:39:34.437514 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 27 05:39:34.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.438249 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 27 05:39:34.439231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 27 05:39:34.445996 systemd-journald[1273]: Time spent on flushing to /var/log/journal/fb532b848b6140a88cce0cf686cb4352 is 68.118ms for 1842 entries.
Jan 27 05:39:34.445996 systemd-journald[1273]: System Journal (/var/log/journal/fb532b848b6140a88cce0cf686cb4352) is 8M, max 588.1M, 580.1M free.
Jan 27 05:39:34.533239 systemd-journald[1273]: Received client request to flush runtime journal.
Jan 27 05:39:34.533283 kernel: loop1: detected capacity change from 0 to 50784
Jan 27 05:39:34.533303 kernel: loop2: detected capacity change from 0 to 219144
Jan 27 05:39:34.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.456688 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 27 05:39:34.458009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 27 05:39:34.461459 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 27 05:39:34.466360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 27 05:39:34.518968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 27 05:39:34.535630 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 27 05:39:34.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.546324 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 27 05:39:34.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.550000 audit: BPF prog-id=18 op=LOAD
Jan 27 05:39:34.550000 audit: BPF prog-id=19 op=LOAD
Jan 27 05:39:34.550000 audit: BPF prog-id=20 op=LOAD
Jan 27 05:39:34.551354 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 27 05:39:34.552000 audit: BPF prog-id=21 op=LOAD
Jan 27 05:39:34.554767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 27 05:39:34.560328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 27 05:39:34.561578 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 27 05:39:34.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.566000 audit: BPF prog-id=22 op=LOAD
Jan 27 05:39:34.566000 audit: BPF prog-id=23 op=LOAD
Jan 27 05:39:34.566000 audit: BPF prog-id=24 op=LOAD
Jan 27 05:39:34.567242 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 27 05:39:34.568000 audit: BPF prog-id=25 op=LOAD
Jan 27 05:39:34.568000 audit: BPF prog-id=26 op=LOAD
Jan 27 05:39:34.569000 audit: BPF prog-id=27 op=LOAD
Jan 27 05:39:34.570065 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 27 05:39:34.600395 kernel: loop3: detected capacity change from 0 to 111560
Jan 27 05:39:34.622958 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 27 05:39:34.622973 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Jan 27 05:39:34.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.634275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 27 05:39:34.648876 systemd-nsresourced[1343]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 27 05:39:34.651098 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 27 05:39:34.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.654240 kernel: loop4: detected capacity change from 0 to 1656
Jan 27 05:39:34.657637 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 27 05:39:34.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.679198 kernel: loop5: detected capacity change from 0 to 50784
Jan 27 05:39:34.714196 kernel: loop6: detected capacity change from 0 to 219144
Jan 27 05:39:34.750196 kernel: loop7: detected capacity change from 0 to 111560
Jan 27 05:39:34.762379 systemd-oomd[1338]: No swap; memory pressure usage will be degraded
Jan 27 05:39:34.763227 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 27 05:39:34.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.765505 systemd-resolved[1339]: Positive Trust Anchors:
Jan 27 05:39:34.765712 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 27 05:39:34.765750 systemd-resolved[1339]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 27 05:39:34.765811 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 27 05:39:34.787257 systemd-resolved[1339]: Using system hostname 'ci-4592-0-0-n-5ca0d578df'.
Jan 27 05:39:34.790072 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 27 05:39:34.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:34.791530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 27 05:39:34.792259 kernel: loop1: detected capacity change from 0 to 1656
Jan 27 05:39:34.801929 (sd-merge)[1360]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'.
Jan 27 05:39:34.805698 (sd-merge)[1360]: Merged extensions into '/usr'.
Jan 27 05:39:34.810913 systemd[1]: Reload requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 27 05:39:34.810931 systemd[1]: Reloading...
Jan 27 05:39:34.893302 zram_generator::config[1394]: No configuration found.
Jan 27 05:39:35.068501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 27 05:39:35.068667 systemd[1]: Reloading finished in 257 ms.
Jan 27 05:39:35.102621 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 27 05:39:35.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:35.103474 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 27 05:39:35.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 27 05:39:35.108384 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 27 05:39:35.134464 systemd[1]: Starting ensure-sysext.service...
Jan 27 05:39:35.138000 audit: BPF prog-id=8 op=UNLOAD
Jan 27 05:39:35.138000 audit: BPF prog-id=7 op=UNLOAD
Jan 27 05:39:35.138520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 27 05:39:35.140000 audit: BPF prog-id=28 op=LOAD Jan 27 05:39:35.140000 audit: BPF prog-id=29 op=LOAD Jan 27 05:39:35.141440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 05:39:35.147000 audit: BPF prog-id=30 op=LOAD Jan 27 05:39:35.147000 audit: BPF prog-id=15 op=UNLOAD Jan 27 05:39:35.148000 audit: BPF prog-id=31 op=LOAD Jan 27 05:39:35.148000 audit: BPF prog-id=32 op=LOAD Jan 27 05:39:35.148000 audit: BPF prog-id=16 op=UNLOAD Jan 27 05:39:35.148000 audit: BPF prog-id=17 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=33 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=25 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=34 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=35 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=26 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=27 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=36 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=22 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=37 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=38 op=LOAD Jan 27 05:39:35.150000 audit: BPF prog-id=23 op=UNLOAD Jan 27 05:39:35.150000 audit: BPF prog-id=24 op=UNLOAD Jan 27 05:39:35.151000 audit: BPF prog-id=39 op=LOAD Jan 27 05:39:35.151000 audit: BPF prog-id=18 op=UNLOAD Jan 27 05:39:35.151000 audit: BPF prog-id=40 op=LOAD Jan 27 05:39:35.151000 audit: BPF prog-id=41 op=LOAD Jan 27 05:39:35.151000 audit: BPF prog-id=19 op=UNLOAD Jan 27 05:39:35.151000 audit: BPF prog-id=20 op=UNLOAD Jan 27 05:39:35.154000 audit: BPF prog-id=42 op=LOAD Jan 27 05:39:35.154000 audit: BPF prog-id=21 op=UNLOAD Jan 27 05:39:35.156380 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 27 05:39:35.162112 systemd[1]: Reload requested from client PID 1438 ('systemctl') (unit ensure-sysext.service)... Jan 27 05:39:35.162128 systemd[1]: Reloading... 
Jan 27 05:39:35.178358 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 27 05:39:35.179382 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 27 05:39:35.181094 systemd-tmpfiles[1439]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 27 05:39:35.183813 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 27 05:39:35.186268 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Jan 27 05:39:35.197991 systemd-tmpfiles[1439]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 05:39:35.199321 systemd-tmpfiles[1439]: Skipping /boot Jan 27 05:39:35.202141 systemd-udevd[1440]: Using default interface naming scheme 'v257'. Jan 27 05:39:35.220515 systemd-tmpfiles[1439]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 05:39:35.221054 systemd-tmpfiles[1439]: Skipping /boot Jan 27 05:39:35.240230 zram_generator::config[1472]: No configuration found. Jan 27 05:39:35.372236 kernel: mousedev: PS/2 mouse device common for all mice Jan 27 05:39:35.419195 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 27 05:39:35.425199 kernel: ACPI: button: Power Button [PWRF] Jan 27 05:39:35.517900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 05:39:35.519117 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 27 05:39:35.519204 systemd[1]: Reloading finished in 356 ms. Jan 27 05:39:35.527981 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 27 05:39:35.528267 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 27 05:39:35.528056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 27 05:39:35.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.530231 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 27 05:39:35.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.535000 audit: BPF prog-id=43 op=LOAD Jan 27 05:39:35.535000 audit: BPF prog-id=44 op=LOAD Jan 27 05:39:35.535000 audit: BPF prog-id=28 op=UNLOAD Jan 27 05:39:35.535000 audit: BPF prog-id=29 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=45 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=39 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=46 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=47 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=40 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=41 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=48 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=33 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=49 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=50 op=LOAD Jan 27 05:39:35.536000 audit: BPF prog-id=34 op=UNLOAD Jan 27 05:39:35.536000 audit: BPF prog-id=35 op=UNLOAD Jan 27 05:39:35.537000 audit: BPF prog-id=51 op=LOAD Jan 27 05:39:35.537000 audit: BPF prog-id=30 op=UNLOAD Jan 27 05:39:35.538000 audit: BPF prog-id=52 op=LOAD Jan 27 05:39:35.530496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 27 05:39:35.539000 audit: BPF prog-id=53 op=LOAD Jan 27 05:39:35.539000 audit: BPF prog-id=31 op=UNLOAD Jan 27 05:39:35.539000 audit: BPF prog-id=32 op=UNLOAD Jan 27 05:39:35.540000 audit: BPF prog-id=54 op=LOAD Jan 27 05:39:35.540000 audit: BPF prog-id=42 op=UNLOAD Jan 27 05:39:35.540000 audit: BPF prog-id=55 op=LOAD Jan 27 05:39:35.541000 audit: BPF prog-id=36 op=UNLOAD Jan 27 05:39:35.541000 audit: BPF prog-id=56 op=LOAD Jan 27 05:39:35.541000 audit: BPF prog-id=57 op=LOAD Jan 27 05:39:35.541000 audit: BPF prog-id=37 op=UNLOAD Jan 27 05:39:35.541000 audit: BPF prog-id=38 op=UNLOAD Jan 27 05:39:35.576935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.579565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 05:39:35.582557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 27 05:39:35.583156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:39:35.584248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 05:39:35.589435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 05:39:35.597873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 05:39:35.598511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:39:35.598687 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 05:39:35.605362 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 27 05:39:35.610530 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 27 05:39:35.611015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:39:35.619462 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 27 05:39:35.620000 audit: BPF prog-id=58 op=LOAD Jan 27 05:39:35.626534 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 05:39:35.642932 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 27 05:39:35.644115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.649550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 05:39:35.649995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 05:39:35.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.655344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.655498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:39:35.658699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 27 05:39:35.659250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:39:35.659406 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 05:39:35.659496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:39:35.659582 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.666795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.667001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 05:39:35.669259 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 05:39:35.680000 audit[1567]: SYSTEM_BOOT pid=1567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.680884 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 27 05:39:35.682526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 05:39:35.682710 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 27 05:39:35.682798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 05:39:35.682955 systemd[1]: Reached target time-set.target - System Time Set. Jan 27 05:39:35.684371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 05:39:35.699225 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 27 05:39:35.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.701822 systemd[1]: Finished ensure-sysext.service. Jan 27 05:39:35.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.720576 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 27 05:39:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.725051 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 05:39:35.728260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 05:39:35.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:35.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.729162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 05:39:35.731256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 05:39:35.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.733468 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 05:39:35.733938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 05:39:35.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.739521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 05:39:35.779600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 05:39:35.780254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 27 05:39:35.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.782784 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 27 05:39:35.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:35.797125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 05:39:35.803271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:35.851000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 27 05:39:35.851000 audit[1605]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd2fe9310 a2=420 a3=0 items=0 ppid=1556 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:35.851000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:39:35.851652 augenrules[1605]: No rules Jan 27 05:39:35.855755 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 05:39:35.857228 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 05:39:35.873679 kernel: pps_core: LinuxPPS API ver. 
1 registered Jan 27 05:39:35.873753 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 27 05:39:35.887196 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 27 05:39:35.891100 kernel: PTP clock support registered Jan 27 05:39:35.896860 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 27 05:39:35.898245 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 27 05:39:35.913202 kernel: Console: switching to colour dummy device 80x25 Jan 27 05:39:35.919364 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 27 05:39:35.928442 systemd-networkd[1565]: lo: Link UP Jan 27 05:39:35.928450 systemd-networkd[1565]: lo: Gained carrier Jan 27 05:39:35.931985 systemd-networkd[1565]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:39:35.931993 systemd-networkd[1565]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 05:39:35.932022 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 05:39:35.932209 systemd[1]: Reached target network.target - Network. Jan 27 05:39:35.932904 systemd-networkd[1565]: eth0: Link UP Jan 27 05:39:35.933050 systemd-networkd[1565]: eth0: Gained carrier Jan 27 05:39:35.933064 systemd-networkd[1565]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 05:39:35.934362 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 27 05:39:35.936378 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 27 05:39:35.943007 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 27 05:39:35.943060 kernel: [drm] features: -context_init Jan 27 05:39:35.942759 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 27 05:39:35.942982 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 27 05:39:35.946252 systemd-networkd[1565]: eth0: DHCPv4 address 10.0.2.139/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 27 05:39:35.952190 kernel: [drm] number of scanouts: 1 Jan 27 05:39:35.967604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:35.969805 kernel: [drm] number of cap sets: 0 Jan 27 05:39:35.971220 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 27 05:39:35.974306 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 27 05:39:35.987202 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 27 05:39:35.987280 kernel: Console: switching to colour frame buffer device 160x50 Jan 27 05:39:36.002671 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 27 05:39:36.008773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 05:39:36.008958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:36.011002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:36.012593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 05:39:36.052170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 05:39:36.568718 ldconfig[1562]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 27 05:39:36.576154 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 27 05:39:36.579214 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 27 05:39:36.597332 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 27 05:39:36.598864 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 05:39:36.599918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 27 05:39:36.600502 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 27 05:39:36.601017 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 27 05:39:36.601291 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 27 05:39:36.601630 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 27 05:39:36.601705 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 27 05:39:36.601823 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 27 05:39:36.601877 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 27 05:39:36.601925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 27 05:39:36.601950 systemd[1]: Reached target paths.target - Path Units. Jan 27 05:39:36.601992 systemd[1]: Reached target timers.target - Timer Units. Jan 27 05:39:36.603444 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 27 05:39:36.604722 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 27 05:39:36.606888 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Jan 27 05:39:36.608344 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 27 05:39:36.609609 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 27 05:39:36.613727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 27 05:39:36.614469 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 27 05:39:36.615477 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 27 05:39:36.616704 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 05:39:36.618803 systemd[1]: Reached target basic.target - Basic System. Jan 27 05:39:36.619246 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 27 05:39:36.619275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 27 05:39:36.621201 systemd[1]: Starting chronyd.service - NTP client/server... Jan 27 05:39:36.626273 systemd[1]: Starting containerd.service - containerd container runtime... Jan 27 05:39:36.630878 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 27 05:39:36.632932 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 27 05:39:36.637417 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 27 05:39:36.643115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 27 05:39:36.647334 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 27 05:39:36.649607 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 27 05:39:36.669249 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:36.672140 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 27 05:39:36.686425 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 27 05:39:36.690495 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 27 05:39:36.692811 jq[1637]: false Jan 27 05:39:36.697313 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 27 05:39:36.702398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 27 05:39:36.711197 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Refreshing passwd entry cache Jan 27 05:39:36.713328 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 27 05:39:36.714747 oslogin_cache_refresh[1639]: Refreshing passwd entry cache Jan 27 05:39:36.714748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 27 05:39:36.726904 extend-filesystems[1638]: Found /dev/vda6 Jan 27 05:39:36.729375 chronyd[1632]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 27 05:39:36.733240 extend-filesystems[1638]: Found /dev/vda9 Jan 27 05:39:36.731601 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 27 05:39:36.732111 chronyd[1632]: Loaded seccomp filter (level 2) Jan 27 05:39:36.737343 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Failure getting users, quitting Jan 27 05:39:36.737343 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 27 05:39:36.737343 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Refreshing group entry cache Jan 27 05:39:36.737410 extend-filesystems[1638]: Checking size of /dev/vda9 Jan 27 05:39:36.734434 oslogin_cache_refresh[1639]: Failure getting users, quitting Jan 27 05:39:36.738126 systemd[1]: Starting update-engine.service - Update Engine... Jan 27 05:39:36.734453 oslogin_cache_refresh[1639]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 05:39:36.734500 oslogin_cache_refresh[1639]: Refreshing group entry cache Jan 27 05:39:36.743342 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Failure getting groups, quitting Jan 27 05:39:36.743342 google_oslogin_nss_cache[1639]: oslogin_cache_refresh[1639]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 05:39:36.742335 oslogin_cache_refresh[1639]: Failure getting groups, quitting Jan 27 05:39:36.742347 oslogin_cache_refresh[1639]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 05:39:36.744426 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 27 05:39:36.747133 systemd[1]: Started chronyd.service - NTP client/server. Jan 27 05:39:36.752794 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 27 05:39:36.754693 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 27 05:39:36.754949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 27 05:39:36.755201 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 27 05:39:36.755476 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 27 05:39:36.758541 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 27 05:39:36.758821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 27 05:39:36.783355 extend-filesystems[1638]: Resized partition /dev/vda9 Jan 27 05:39:36.791055 jq[1658]: true Jan 27 05:39:36.791884 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025) Jan 27 05:39:36.801300 systemd[1]: motdgen.service: Deactivated successfully. Jan 27 05:39:36.801915 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 27 05:39:36.806881 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 27 05:39:36.814547 update_engine[1651]: I20260127 05:39:36.814455 1651 main.cc:92] Flatcar Update Engine starting Jan 27 05:39:36.816444 tar[1662]: linux-amd64/LICENSE Jan 27 05:39:36.816645 tar[1662]: linux-amd64/helm Jan 27 05:39:36.831235 dbus-daemon[1635]: [system] SELinux support is enabled Jan 27 05:39:36.831435 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 27 05:39:36.836988 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 27 05:39:36.837021 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 27 05:39:36.840107 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 27 05:39:36.840126 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 27 05:39:36.843310 jq[1685]: true Jan 27 05:39:36.851024 systemd[1]: Started update-engine.service - Update Engine. Jan 27 05:39:36.858264 update_engine[1651]: I20260127 05:39:36.854387 1651 update_check_scheduler.cc:74] Next update check in 3m15s Jan 27 05:39:36.855587 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 27 05:39:36.993910 systemd-logind[1646]: New seat seat0. 
Jan 27 05:39:37.006005 locksmithd[1691]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 27 05:39:37.031204 systemd-logind[1646]: Watching system buttons on /dev/input/event3 (Power Button) Jan 27 05:39:37.031229 systemd-logind[1646]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 27 05:39:37.031549 systemd[1]: Started systemd-logind.service - User Login Management. Jan 27 05:39:37.101528 bash[1709]: Updated "/home/core/.ssh/authorized_keys" Jan 27 05:39:37.102714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 27 05:39:37.108441 systemd[1]: Starting sshkeys.service... Jan 27 05:39:37.129637 containerd[1684]: time="2026-01-27T05:39:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 27 05:39:37.131688 containerd[1684]: time="2026-01-27T05:39:37.131655185Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 27 05:39:37.152147 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 27 05:39:37.154929 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 27 05:39:37.166006 containerd[1684]: time="2026-01-27T05:39:37.165946530Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.571µs" Jan 27 05:39:37.166006 containerd[1684]: time="2026-01-27T05:39:37.166003782Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 27 05:39:37.166109 containerd[1684]: time="2026-01-27T05:39:37.166044225Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 27 05:39:37.166109 containerd[1684]: time="2026-01-27T05:39:37.166060973Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 27 05:39:37.166268 containerd[1684]: time="2026-01-27T05:39:37.166251693Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 27 05:39:37.166292 containerd[1684]: time="2026-01-27T05:39:37.166274463Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166340 containerd[1684]: time="2026-01-27T05:39:37.166325891Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166340617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166547171Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166559862Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166574407Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166585113Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166710458Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166721670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166777201Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.166934 containerd[1684]: time="2026-01-27T05:39:37.166913315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.167086 containerd[1684]: time="2026-01-27T05:39:37.166939609Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 05:39:37.167086 containerd[1684]: time="2026-01-27T05:39:37.166951685Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 27 05:39:37.167086 containerd[1684]: time="2026-01-27T05:39:37.166975762Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 27 05:39:37.168539 containerd[1684]: 
time="2026-01-27T05:39:37.168515135Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 27 05:39:37.168609 containerd[1684]: time="2026-01-27T05:39:37.168591253Z" level=info msg="metadata content store policy set" policy=shared Jan 27 05:39:37.177197 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:37.217431 containerd[1684]: time="2026-01-27T05:39:37.217383886Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 27 05:39:37.217540 containerd[1684]: time="2026-01-27T05:39:37.217453735Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 05:39:37.217634 containerd[1684]: time="2026-01-27T05:39:37.217552890Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 05:39:37.217634 containerd[1684]: time="2026-01-27T05:39:37.217570823Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 27 05:39:37.217634 containerd[1684]: time="2026-01-27T05:39:37.217582884Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 27 05:39:37.217634 containerd[1684]: time="2026-01-27T05:39:37.217616041Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 27 05:39:37.217634 containerd[1684]: time="2026-01-27T05:39:37.217638994Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217650684Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217662227Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217673693Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217684990Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217696489Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217705790Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 27 05:39:37.217737 containerd[1684]: time="2026-01-27T05:39:37.217717279Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 27 05:39:37.217860 containerd[1684]: time="2026-01-27T05:39:37.217840018Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 27 05:39:37.217860 containerd[1684]: time="2026-01-27T05:39:37.217857768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 27 05:39:37.217895 containerd[1684]: time="2026-01-27T05:39:37.217872207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 27 05:39:37.217895 containerd[1684]: time="2026-01-27T05:39:37.217883042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 27 05:39:37.217895 containerd[1684]: time="2026-01-27T05:39:37.217892834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 27 05:39:37.217954 containerd[1684]: time="2026-01-27T05:39:37.217902339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jan 27 05:39:37.217954 containerd[1684]: time="2026-01-27T05:39:37.217919237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 27 05:39:37.217954 containerd[1684]: time="2026-01-27T05:39:37.217936388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 27 05:39:37.218007 containerd[1684]: time="2026-01-27T05:39:37.217956308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 27 05:39:37.218007 containerd[1684]: time="2026-01-27T05:39:37.217969209Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 27 05:39:37.218007 containerd[1684]: time="2026-01-27T05:39:37.217978200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 27 05:39:37.218007 containerd[1684]: time="2026-01-27T05:39:37.217999154Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 27 05:39:37.218075 containerd[1684]: time="2026-01-27T05:39:37.218052537Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 27 05:39:37.218075 containerd[1684]: time="2026-01-27T05:39:37.218066129Z" level=info msg="Start snapshots syncer" Jan 27 05:39:37.218128 containerd[1684]: time="2026-01-27T05:39:37.218118874Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 27 05:39:37.218465 containerd[1684]: time="2026-01-27T05:39:37.218430559Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 27 05:39:37.218600 containerd[1684]: time="2026-01-27T05:39:37.218484915Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 27 05:39:37.218600 containerd[1684]: 
time="2026-01-27T05:39:37.218534791Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 27 05:39:37.218639 containerd[1684]: time="2026-01-27T05:39:37.218624802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218662758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218685066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218695988Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218707091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218717052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218726120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218735684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 27 05:39:37.218757 containerd[1684]: time="2026-01-27T05:39:37.218744982Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218778388Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 05:39:37.219267 containerd[1684]: 
time="2026-01-27T05:39:37.218791027Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218798829Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218807136Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218814545Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218823628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218832926Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218843890Z" level=info msg="runtime interface created" Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218848717Z" level=info msg="created NRI interface" Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218855759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218865837Z" level=info msg="Connect containerd service" Jan 27 05:39:37.219267 containerd[1684]: time="2026-01-27T05:39:37.218894651Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 27 05:39:37.220117 sshd_keygen[1660]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 27 05:39:37.222061 containerd[1684]: 
time="2026-01-27T05:39:37.221419462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 05:39:37.279700 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 27 05:39:37.286669 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 27 05:39:37.311727 systemd[1]: issuegen.service: Deactivated successfully. Jan 27 05:39:37.312388 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 27 05:39:37.318486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342268401Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342326634Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342353955Z" level=info msg="Start subscribing containerd event" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342378451Z" level=info msg="Start recovering state" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342472982Z" level=info msg="Start event monitor" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342484734Z" level=info msg="Start cni network conf syncer for default" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342496268Z" level=info msg="Start streaming server" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342507494Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342514577Z" level=info msg="runtime interface starting up..." 
Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342520109Z" level=info msg="starting plugins..." Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342532149Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 27 05:39:37.343197 containerd[1684]: time="2026-01-27T05:39:37.342619049Z" level=info msg="containerd successfully booted in 0.213659s" Jan 27 05:39:37.342879 systemd[1]: Started containerd.service - containerd container runtime. Jan 27 05:39:37.345131 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 27 05:39:37.350150 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 27 05:39:37.354491 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 27 05:39:37.356718 systemd[1]: Reached target getty.target - Login Prompts. Jan 27 05:39:37.410206 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 27 05:39:37.438977 extend-filesystems[1680]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 27 05:39:37.438977 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 27 05:39:37.438977 extend-filesystems[1680]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 27 05:39:37.446289 extend-filesystems[1638]: Resized filesystem in /dev/vda9 Jan 27 05:39:37.439918 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 27 05:39:37.440188 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 27 05:39:37.498319 systemd-networkd[1565]: eth0: Gained IPv6LL Jan 27 05:39:37.500057 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 27 05:39:37.503343 systemd[1]: Reached target network-online.target - Network is Online. Jan 27 05:39:37.506846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:37.510440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 27 05:39:37.542668 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 27 05:39:37.563198 tar[1662]: linux-amd64/README.md Jan 27 05:39:37.579587 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 27 05:39:37.727210 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:38.191224 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:38.710265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:39:38.723595 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:39:39.470812 kubelet[1774]: E0127 05:39:39.470765 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:39:39.473735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:39:39.473931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:39:39.474775 systemd[1]: kubelet.service: Consumed 984ms CPU time, 258.5M memory peak. Jan 27 05:39:39.735229 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:40.203225 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:40.223648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 27 05:39:40.229598 systemd[1]: Started sshd@0-10.0.2.139:22-4.153.228.146:38102.service - OpenSSH per-connection server daemon (4.153.228.146:38102). 
Jan 27 05:39:40.818207 sshd[1784]: Accepted publickey for core from 4.153.228.146 port 38102 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:40.820729 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:40.831161 systemd-logind[1646]: New session 1 of user core. Jan 27 05:39:40.832866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 27 05:39:40.834526 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 27 05:39:40.866183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 27 05:39:40.869328 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 27 05:39:40.885429 (systemd)[1790]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:40.888140 systemd-logind[1646]: New session 2 of user core. Jan 27 05:39:41.019577 systemd[1790]: Queued start job for default target default.target. Jan 27 05:39:41.030492 systemd[1790]: Created slice app.slice - User Application Slice. Jan 27 05:39:41.030523 systemd[1790]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 27 05:39:41.030538 systemd[1790]: Reached target paths.target - Paths. Jan 27 05:39:41.030583 systemd[1790]: Reached target timers.target - Timers. Jan 27 05:39:41.031872 systemd[1790]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 27 05:39:41.034362 systemd[1790]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 27 05:39:41.045486 systemd[1790]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 27 05:39:41.045590 systemd[1790]: Reached target sockets.target - Sockets. Jan 27 05:39:41.047338 systemd[1790]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 27 05:39:41.047444 systemd[1790]: Reached target basic.target - Basic System. 
Jan 27 05:39:41.047498 systemd[1790]: Reached target default.target - Main User Target. Jan 27 05:39:41.047526 systemd[1790]: Startup finished in 153ms. Jan 27 05:39:41.047706 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 27 05:39:41.051481 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 27 05:39:41.362774 systemd[1]: Started sshd@1-10.0.2.139:22-4.153.228.146:38112.service - OpenSSH per-connection server daemon (4.153.228.146:38112). Jan 27 05:39:41.896905 sshd[1804]: Accepted publickey for core from 4.153.228.146 port 38112 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:41.898290 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:41.903952 systemd-logind[1646]: New session 3 of user core. Jan 27 05:39:41.910669 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 27 05:39:42.212959 sshd[1808]: Connection closed by 4.153.228.146 port 38112 Jan 27 05:39:42.215153 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:42.221485 systemd-logind[1646]: Session 3 logged out. Waiting for processes to exit. Jan 27 05:39:42.222082 systemd[1]: sshd@1-10.0.2.139:22-4.153.228.146:38112.service: Deactivated successfully. Jan 27 05:39:42.224750 systemd[1]: session-3.scope: Deactivated successfully. Jan 27 05:39:42.228273 systemd-logind[1646]: Removed session 3. Jan 27 05:39:42.327838 systemd[1]: Started sshd@2-10.0.2.139:22-4.153.228.146:38124.service - OpenSSH per-connection server daemon (4.153.228.146:38124). Jan 27 05:39:42.883256 sshd[1814]: Accepted publickey for core from 4.153.228.146 port 38124 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:42.885132 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:42.895365 systemd-logind[1646]: New session 4 of user core. 
Jan 27 05:39:42.917561 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 27 05:39:43.193390 sshd[1822]: Connection closed by 4.153.228.146 port 38124 Jan 27 05:39:43.194724 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:43.202064 systemd[1]: sshd@2-10.0.2.139:22-4.153.228.146:38124.service: Deactivated successfully. Jan 27 05:39:43.204113 systemd[1]: session-4.scope: Deactivated successfully. Jan 27 05:39:43.205648 systemd-logind[1646]: Session 4 logged out. Waiting for processes to exit. Jan 27 05:39:43.206570 systemd-logind[1646]: Removed session 4. Jan 27 05:39:43.747212 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:43.754618 coreos-metadata[1634]: Jan 27 05:39:43.754 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:39:43.774373 coreos-metadata[1634]: Jan 27 05:39:43.774 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 27 05:39:44.237208 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 05:39:44.256731 coreos-metadata[1718]: Jan 27 05:39:44.256 WARN failed to locate config-drive, using the metadata service API instead Jan 27 05:39:44.270134 coreos-metadata[1718]: Jan 27 05:39:44.270 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 27 05:39:44.913279 coreos-metadata[1634]: Jan 27 05:39:44.913 INFO Fetch successful Jan 27 05:39:44.914339 coreos-metadata[1634]: Jan 27 05:39:44.913 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 05:39:45.513503 coreos-metadata[1718]: Jan 27 05:39:45.513 INFO Fetch successful Jan 27 05:39:45.513503 coreos-metadata[1718]: Jan 27 05:39:45.513 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 27 05:39:46.118583 coreos-metadata[1634]: Jan 27 05:39:46.118 INFO Fetch successful Jan 27 05:39:46.118583 coreos-metadata[1634]: Jan 27 05:39:46.118 INFO Fetching 
http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 27 05:39:46.667076 coreos-metadata[1718]: Jan 27 05:39:46.666 INFO Fetch successful Jan 27 05:39:46.675526 unknown[1718]: wrote ssh authorized keys file for user: core Jan 27 05:39:46.710475 coreos-metadata[1634]: Jan 27 05:39:46.710 INFO Fetch successful Jan 27 05:39:46.710475 coreos-metadata[1634]: Jan 27 05:39:46.710 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 27 05:39:46.716855 update-ssh-keys[1831]: Updated "/home/core/.ssh/authorized_keys" Jan 27 05:39:46.717889 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 27 05:39:46.722225 systemd[1]: Finished sshkeys.service. Jan 27 05:39:47.298084 coreos-metadata[1634]: Jan 27 05:39:47.297 INFO Fetch successful Jan 27 05:39:47.298084 coreos-metadata[1634]: Jan 27 05:39:47.298 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 27 05:39:49.660448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 27 05:39:49.662859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:39:49.837869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 27 05:39:49.849517 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:39:49.885539 kubelet[1843]: E0127 05:39:49.885495 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:39:49.889305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:39:49.889444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:39:49.890001 systemd[1]: kubelet.service: Consumed 164ms CPU time, 110.1M memory peak. Jan 27 05:39:49.950491 coreos-metadata[1634]: Jan 27 05:39:49.950 INFO Fetch successful Jan 27 05:39:49.950491 coreos-metadata[1634]: Jan 27 05:39:49.950 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 27 05:39:50.531854 coreos-metadata[1634]: Jan 27 05:39:50.531 INFO Fetch successful Jan 27 05:39:50.569399 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 27 05:39:50.570247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 27 05:39:50.570499 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 27 05:39:50.572467 systemd[1]: Startup finished in 3.639s (kernel) + 13.530s (initrd) + 17.307s (userspace) = 34.478s. Jan 27 05:39:53.305824 systemd[1]: Started sshd@3-10.0.2.139:22-4.153.228.146:37832.service - OpenSSH per-connection server daemon (4.153.228.146:37832). 
Jan 27 05:39:53.852278 sshd[1856]: Accepted publickey for core from 4.153.228.146 port 37832 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:53.854643 sshd-session[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:53.865544 systemd-logind[1646]: New session 5 of user core. Jan 27 05:39:53.881462 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 27 05:39:54.153421 sshd[1860]: Connection closed by 4.153.228.146 port 37832 Jan 27 05:39:54.154636 sshd-session[1856]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:54.162593 systemd[1]: sshd@3-10.0.2.139:22-4.153.228.146:37832.service: Deactivated successfully. Jan 27 05:39:54.167713 systemd[1]: session-5.scope: Deactivated successfully. Jan 27 05:39:54.170162 systemd-logind[1646]: Session 5 logged out. Waiting for processes to exit. Jan 27 05:39:54.173632 systemd-logind[1646]: Removed session 5. Jan 27 05:39:54.264854 systemd[1]: Started sshd@4-10.0.2.139:22-4.153.228.146:47616.service - OpenSSH per-connection server daemon (4.153.228.146:47616). Jan 27 05:39:54.821035 sshd[1866]: Accepted publickey for core from 4.153.228.146 port 47616 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:54.821742 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:54.825963 systemd-logind[1646]: New session 6 of user core. Jan 27 05:39:54.838459 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 27 05:39:55.114132 sshd[1870]: Connection closed by 4.153.228.146 port 47616 Jan 27 05:39:55.116563 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:55.126161 systemd[1]: sshd@4-10.0.2.139:22-4.153.228.146:47616.service: Deactivated successfully. Jan 27 05:39:55.130741 systemd[1]: session-6.scope: Deactivated successfully. Jan 27 05:39:55.133131 systemd-logind[1646]: Session 6 logged out. 
Waiting for processes to exit. Jan 27 05:39:55.136011 systemd-logind[1646]: Removed session 6. Jan 27 05:39:55.224576 systemd[1]: Started sshd@5-10.0.2.139:22-4.153.228.146:47620.service - OpenSSH per-connection server daemon (4.153.228.146:47620). Jan 27 05:39:55.757651 sshd[1876]: Accepted publickey for core from 4.153.228.146 port 47620 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:55.759256 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:55.764556 systemd-logind[1646]: New session 7 of user core. Jan 27 05:39:55.778441 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 27 05:39:56.058557 sshd[1880]: Connection closed by 4.153.228.146 port 47620 Jan 27 05:39:56.059514 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:56.069448 systemd[1]: sshd@5-10.0.2.139:22-4.153.228.146:47620.service: Deactivated successfully. Jan 27 05:39:56.073353 systemd[1]: session-7.scope: Deactivated successfully. Jan 27 05:39:56.076905 systemd-logind[1646]: Session 7 logged out. Waiting for processes to exit. Jan 27 05:39:56.078364 systemd-logind[1646]: Removed session 7. Jan 27 05:39:56.175032 systemd[1]: Started sshd@6-10.0.2.139:22-4.153.228.146:47636.service - OpenSSH per-connection server daemon (4.153.228.146:47636). Jan 27 05:39:56.754909 sshd[1886]: Accepted publickey for core from 4.153.228.146 port 47636 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:56.756556 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:56.765288 systemd-logind[1646]: New session 8 of user core. Jan 27 05:39:56.776485 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 27 05:39:57.003098 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 27 05:39:57.004639 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:39:57.025533 sudo[1891]: pam_unix(sudo:session): session closed for user root Jan 27 05:39:57.126436 sshd[1890]: Connection closed by 4.153.228.146 port 47636 Jan 27 05:39:57.127967 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:57.137247 systemd[1]: sshd@6-10.0.2.139:22-4.153.228.146:47636.service: Deactivated successfully. Jan 27 05:39:57.141541 systemd[1]: session-8.scope: Deactivated successfully. Jan 27 05:39:57.143803 systemd-logind[1646]: Session 8 logged out. Waiting for processes to exit. Jan 27 05:39:57.147370 systemd-logind[1646]: Removed session 8. Jan 27 05:39:57.237076 systemd[1]: Started sshd@7-10.0.2.139:22-4.153.228.146:47638.service - OpenSSH per-connection server daemon (4.153.228.146:47638). Jan 27 05:39:57.759050 sshd[1898]: Accepted publickey for core from 4.153.228.146 port 47638 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:57.760327 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:57.765076 systemd-logind[1646]: New session 9 of user core. Jan 27 05:39:57.771361 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 27 05:39:57.954654 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 27 05:39:57.954901 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:39:57.958414 sudo[1904]: pam_unix(sudo:session): session closed for user root Jan 27 05:39:57.963946 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 27 05:39:57.964272 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:39:57.971336 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 05:39:58.007928 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 27 05:39:58.008030 kernel: audit: type=1305 audit(1769492398.005:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 05:39:58.005000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 05:39:58.009214 kernel: audit: type=1300 audit(1769492398.005:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f3f4760 a2=420 a3=0 items=0 ppid=1909 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:58.005000 audit[1928]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3f3f4760 a2=420 a3=0 items=0 ppid=1909 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:58.005000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:39:58.013377 kernel: audit: type=1327 audit(1769492398.005:232): 
proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 05:39:58.013441 augenrules[1928]: No rules Jan 27 05:39:58.014682 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 05:39:58.014972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 05:39:58.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.016377 sudo[1903]: pam_unix(sudo:session): session closed for user root Jan 27 05:39:58.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.020572 kernel: audit: type=1130 audit(1769492398.015:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.020637 kernel: audit: type=1131 audit(1769492398.015:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.016000 audit[1903]: USER_END pid=1903 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.023091 kernel: audit: type=1106 audit(1769492398.016:235): pid=1903 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:58.023130 kernel: audit: type=1104 audit(1769492398.016:236): pid=1903 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.016000 audit[1903]: CRED_DISP pid=1903 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.111238 sshd[1902]: Connection closed by 4.153.228.146 port 47638 Jan 27 05:39:58.111911 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Jan 27 05:39:58.114000 audit[1898]: USER_END pid=1898 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.115000 audit[1898]: CRED_DISP pid=1898 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.125691 systemd[1]: sshd@7-10.0.2.139:22-4.153.228.146:47638.service: Deactivated successfully. Jan 27 05:39:58.127931 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 27 05:39:58.130707 kernel: audit: type=1106 audit(1769492398.114:237): pid=1898 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.130750 kernel: audit: type=1104 audit(1769492398.115:238): pid=1898 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.131247 kernel: audit: type=1131 audit(1769492398.125:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.2.139:22-4.153.228.146:47638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.2.139:22-4.153.228.146:47638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:39:58.138727 systemd-logind[1646]: Session 9 logged out. Waiting for processes to exit. Jan 27 05:39:58.139792 systemd-logind[1646]: Removed session 9. Jan 27 05:39:58.226066 systemd[1]: Started sshd@8-10.0.2.139:22-4.153.228.146:47650.service - OpenSSH per-connection server daemon (4.153.228.146:47650). Jan 27 05:39:58.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.139:22-4.153.228.146:47650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:39:58.800000 audit[1937]: USER_ACCT pid=1937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.800897 sshd[1937]: Accepted publickey for core from 4.153.228.146 port 47650 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:39:58.802000 audit[1937]: CRED_ACQ pid=1937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.802000 audit[1937]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbf3863b0 a2=3 a3=0 items=0 ppid=1 pid=1937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:39:58.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:39:58.803904 sshd-session[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:39:58.810072 systemd-logind[1646]: New session 10 of user core. Jan 27 05:39:58.821757 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 27 05:39:58.825000 audit[1937]: USER_START pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:58.828000 audit[1941]: CRED_ACQ pid=1941 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:39:59.016000 audit[1942]: USER_ACCT pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:59.016000 audit[1942]: CRED_REFR pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:59.017000 audit[1942]: USER_START pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:39:59.016876 sudo[1942]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 27 05:39:59.017576 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 05:39:59.573504 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 27 05:39:59.592707 (dockerd)[1961]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 27 05:39:59.911024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 27 05:39:59.913730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:40:00.005280 dockerd[1961]: time="2026-01-27T05:40:00.005228110Z" level=info msg="Starting up" Jan 27 05:40:00.008100 dockerd[1961]: time="2026-01-27T05:40:00.007743441Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 27 05:40:00.023229 dockerd[1961]: time="2026-01-27T05:40:00.022741514Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 27 05:40:00.051879 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport260786744-merged.mount: Deactivated successfully. Jan 27 05:40:00.055826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:00.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:00.064412 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:40:00.091919 dockerd[1961]: time="2026-01-27T05:40:00.091877356Z" level=info msg="Loading containers: start." 
Jan 27 05:40:00.102735 kernel: Initializing XFRM netlink socket Jan 27 05:40:00.113571 kubelet[1990]: E0127 05:40:00.113542 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:40:00.115925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:40:00.116525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 05:40:00.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:40:00.117075 systemd[1]: kubelet.service: Consumed 141ms CPU time, 109.9M memory peak. Jan 27 05:40:00.175000 audit[2026]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.175000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee22b7e50 a2=0 a3=0 items=0 ppid=1961 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 05:40:00.178000 audit[2028]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.178000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd5a9c0640 a2=0 a3=0 items=0 ppid=1961 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 05:40:00.179000 audit[2030]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.179000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf9fad780 a2=0 a3=0 items=0 ppid=1961 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 05:40:00.181000 audit[2032]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.181000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbc4461b0 a2=0 a3=0 items=0 ppid=1961 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 05:40:00.183000 audit[2034]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.183000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa288a9a0 a2=0 a3=0 items=0 ppid=1961 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 05:40:00.185000 audit[2036]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.185000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd39656130 a2=0 a3=0 items=0 ppid=1961 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:40:00.187000 audit[2038]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.187000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe81008440 a2=0 a3=0 items=0 ppid=1961 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:40:00.189000 audit[2040]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.189000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd01bb6a20 a2=0 a3=0 items=0 ppid=1961 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 05:40:00.227000 audit[2043]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.227000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffecd234300 a2=0 a3=0 items=0 ppid=1961 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 27 05:40:00.229000 audit[2045]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.229000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee0671bd0 a2=0 a3=0 items=0 ppid=1961 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 05:40:00.231000 audit[2047]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.231000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 
a1=7ffe16ba4530 a2=0 a3=0 items=0 ppid=1961 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 05:40:00.232000 audit[2049]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.232000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe02890da0 a2=0 a3=0 items=0 ppid=1961 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:40:00.234000 audit[2051]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.234000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde9ce05e0 a2=0 a3=0 items=0 ppid=1961 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 05:40:00.272000 audit[2081]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.272000 audit[2081]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef102e490 a2=0 a3=0 items=0 ppid=1961 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 05:40:00.274000 audit[2083]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.274000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdcdd50b80 a2=0 a3=0 items=0 ppid=1961 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.274000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 05:40:00.276000 audit[2085]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.276000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b70dec0 a2=0 a3=0 items=0 ppid=1961 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 05:40:00.278000 audit[2087]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.278000 audit[2087]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffc0c4c2e00 a2=0 a3=0 items=0 ppid=1961 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 05:40:00.279000 audit[2089]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.279000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff05770e10 a2=0 a3=0 items=0 ppid=1961 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.279000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 05:40:00.281000 audit[2091]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.281000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe79301d80 a2=0 a3=0 items=0 ppid=1961 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:40:00.283000 audit[2093]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.283000 audit[2093]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7ffeaa980480 a2=0 a3=0 items=0 ppid=1961 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:40:00.285000 audit[2095]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.285000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd9b260f50 a2=0 a3=0 items=0 ppid=1961 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 05:40:00.287000 audit[2097]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.287000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcaa58ccf0 a2=0 a3=0 items=0 ppid=1961 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 27 05:40:00.290000 audit[2099]: NETFILTER_CFG 
table=filter:24 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.290000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe16820f0 a2=0 a3=0 items=0 ppid=1961 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 05:40:00.291000 audit[2101]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.291000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffebf6a0100 a2=0 a3=0 items=0 ppid=1961 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 05:40:00.293000 audit[2103]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.293000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffc45c26e0 a2=0 a3=0 items=0 ppid=1961 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.293000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 05:40:00.295000 audit[2105]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.295000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdf739ed80 a2=0 a3=0 items=0 ppid=1961 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 05:40:00.300000 audit[2110]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.300000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf85b57c0 a2=0 a3=0 items=0 ppid=1961 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 05:40:00.302000 audit[2112]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.302000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea5f20170 a2=0 a3=0 items=0 ppid=1961 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:00.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 05:40:00.303000 audit[2114]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.303000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce84d6f50 a2=0 a3=0 items=0 ppid=1961 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 05:40:00.305000 audit[2116]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.305000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd81e38040 a2=0 a3=0 items=0 ppid=1961 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 05:40:00.307000 audit[2118]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.307000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcbe10f390 a2=0 a3=0 items=0 ppid=1961 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.307000 
audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 05:40:00.309000 audit[2120]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:00.309000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd847d6f40 a2=0 a3=0 items=0 ppid=1961 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 05:40:00.343000 audit[2124]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.343000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffce6c2ce30 a2=0 a3=0 items=0 ppid=1961 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 27 05:40:00.346000 audit[2126]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.346000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff9a2ac4d0 a2=0 a3=0 items=0 ppid=1961 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 27 05:40:00.353000 audit[2134]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.353000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffdcb5371c0 a2=0 a3=0 items=0 ppid=1961 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 27 05:40:00.363000 audit[2140]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.363000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe5bec3d30 a2=0 a3=0 items=0 ppid=1961 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 27 05:40:00.365000 audit[2142]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.365000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdc93d40d0 a2=0 a3=0 items=0 ppid=1961 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 27 05:40:00.367000 audit[2144]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.367000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd030db60 a2=0 a3=0 items=0 ppid=1961 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.367000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 27 05:40:00.369000 audit[2146]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.369000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffefc003490 a2=0 a3=0 items=0 ppid=1961 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.369000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 05:40:00.371000 audit[2148]: NETFILTER_CFG table=filter:41 family=2 entries=1 
op=nft_register_rule pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:00.371000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff819d69a0 a2=0 a3=0 items=0 ppid=1961 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:00.371000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 27 05:40:00.371931 systemd-networkd[1565]: docker0: Link UP Jan 27 05:40:00.381451 dockerd[1961]: time="2026-01-27T05:40:00.381384232Z" level=info msg="Loading containers: done." Jan 27 05:40:00.418310 dockerd[1961]: time="2026-01-27T05:40:00.418263019Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 27 05:40:00.418481 dockerd[1961]: time="2026-01-27T05:40:00.418343960Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 27 05:40:00.418481 dockerd[1961]: time="2026-01-27T05:40:00.418418132Z" level=info msg="Initializing buildkit" Jan 27 05:40:00.448839 dockerd[1961]: time="2026-01-27T05:40:00.448730390Z" level=info msg="Completed buildkit initialization" Jan 27 05:40:00.455878 dockerd[1961]: time="2026-01-27T05:40:00.455837112Z" level=info msg="Daemon has completed initialization" Jan 27 05:40:00.456056 dockerd[1961]: time="2026-01-27T05:40:00.455988814Z" level=info msg="API listen on /run/docker.sock" Jan 27 05:40:00.457105 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 27 05:40:00.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:00.517987 chronyd[1632]: Selected source PHC0 Jan 27 05:40:00.518013 chronyd[1632]: System clock wrong by 1.085834 seconds Jan 27 05:40:01.604496 chronyd[1632]: System clock was stepped by 1.085834 seconds Jan 27 05:40:01.605245 systemd-resolved[1339]: Clock change detected. Flushing caches. Jan 27 05:40:02.135355 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3982911187-merged.mount: Deactivated successfully. Jan 27 05:40:02.852741 containerd[1684]: time="2026-01-27T05:40:02.852685290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 27 05:40:03.630103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560672547.mount: Deactivated successfully. Jan 27 05:40:04.484034 containerd[1684]: time="2026-01-27T05:40:04.483636849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:04.486194 containerd[1684]: time="2026-01-27T05:40:04.486171185Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Jan 27 05:40:04.487849 containerd[1684]: time="2026-01-27T05:40:04.487815500Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:04.501030 containerd[1684]: time="2026-01-27T05:40:04.500590071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:04.501174 containerd[1684]: time="2026-01-27T05:40:04.501130625Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.648398086s" Jan 27 05:40:04.501218 containerd[1684]: time="2026-01-27T05:40:04.501192598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 27 05:40:04.502127 containerd[1684]: time="2026-01-27T05:40:04.502110957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 27 05:40:05.602869 containerd[1684]: time="2026-01-27T05:40:05.602823041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:05.604059 containerd[1684]: time="2026-01-27T05:40:05.604032029Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 27 05:40:05.605939 containerd[1684]: time="2026-01-27T05:40:05.605882039Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:05.609559 containerd[1684]: time="2026-01-27T05:40:05.609502458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:05.610457 containerd[1684]: time="2026-01-27T05:40:05.610351740Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.108096816s" Jan 27 05:40:05.610457 containerd[1684]: time="2026-01-27T05:40:05.610379094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 27 05:40:05.610802 containerd[1684]: time="2026-01-27T05:40:05.610774131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 27 05:40:06.526033 containerd[1684]: time="2026-01-27T05:40:06.525465679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:06.533864 containerd[1684]: time="2026-01-27T05:40:06.533833450Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 27 05:40:06.535298 containerd[1684]: time="2026-01-27T05:40:06.535277829Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:06.539208 containerd[1684]: time="2026-01-27T05:40:06.539177957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:06.539874 containerd[1684]: time="2026-01-27T05:40:06.539852043Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size 
\"17382979\" in 929.047103ms" Jan 27 05:40:06.539942 containerd[1684]: time="2026-01-27T05:40:06.539931684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 27 05:40:06.540811 containerd[1684]: time="2026-01-27T05:40:06.540784659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 27 05:40:07.475657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411667851.mount: Deactivated successfully. Jan 27 05:40:07.738670 containerd[1684]: time="2026-01-27T05:40:07.738527352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:07.740325 containerd[1684]: time="2026-01-27T05:40:07.740180552Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 27 05:40:07.741794 containerd[1684]: time="2026-01-27T05:40:07.741774728Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:07.744488 containerd[1684]: time="2026-01-27T05:40:07.744436289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:07.745284 containerd[1684]: time="2026-01-27T05:40:07.745251397Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.204432656s" Jan 27 05:40:07.745284 containerd[1684]: 
time="2026-01-27T05:40:07.745279603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 27 05:40:07.745831 containerd[1684]: time="2026-01-27T05:40:07.745807185Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 27 05:40:08.554276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719211795.mount: Deactivated successfully. Jan 27 05:40:09.258437 containerd[1684]: time="2026-01-27T05:40:09.258385287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.259567 containerd[1684]: time="2026-01-27T05:40:09.259546910Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511" Jan 27 05:40:09.261092 containerd[1684]: time="2026-01-27T05:40:09.261071520Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.263902 containerd[1684]: time="2026-01-27T05:40:09.263874876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.264560 containerd[1684]: time="2026-01-27T05:40:09.264533488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.518701845s" Jan 27 05:40:09.264610 containerd[1684]: time="2026-01-27T05:40:09.264566990Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 27 05:40:09.265081 containerd[1684]: time="2026-01-27T05:40:09.265064862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 27 05:40:09.917130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177662794.mount: Deactivated successfully. Jan 27 05:40:09.930162 containerd[1684]: time="2026-01-27T05:40:09.930083982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.933023 containerd[1684]: time="2026-01-27T05:40:09.932950590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 27 05:40:09.934776 containerd[1684]: time="2026-01-27T05:40:09.934726728Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.938568 containerd[1684]: time="2026-01-27T05:40:09.938501018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:09.939824 containerd[1684]: time="2026-01-27T05:40:09.939787504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 674.605937ms" Jan 27 05:40:09.939824 containerd[1684]: time="2026-01-27T05:40:09.939817596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 27 
05:40:09.940335 containerd[1684]: time="2026-01-27T05:40:09.940288391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 27 05:40:10.682848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496756489.mount: Deactivated successfully. Jan 27 05:40:11.259606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 27 05:40:11.266075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:40:11.426530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:11.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:11.427590 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 27 05:40:11.427636 kernel: audit: type=1130 audit(1769492411.425:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:11.436446 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 05:40:11.501403 kubelet[2379]: E0127 05:40:11.501354 2379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 05:40:11.503478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 05:40:11.503608 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 27 05:40:11.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:40:11.507420 systemd[1]: kubelet.service: Consumed 180ms CPU time, 111.9M memory peak. Jan 27 05:40:11.508025 kernel: audit: type=1131 audit(1769492411.503:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:40:12.569034 containerd[1684]: time="2026-01-27T05:40:12.568964542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:12.572002 containerd[1684]: time="2026-01-27T05:40:12.571955690Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 27 05:40:12.574600 containerd[1684]: time="2026-01-27T05:40:12.574554089Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:12.579685 containerd[1684]: time="2026-01-27T05:40:12.579633398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:12.580744 containerd[1684]: time="2026-01-27T05:40:12.580074192Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.639758152s" Jan 27 05:40:12.580744 containerd[1684]: 
time="2026-01-27T05:40:12.580100560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 27 05:40:16.328251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:16.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:16.328406 systemd[1]: kubelet.service: Consumed 180ms CPU time, 111.9M memory peak. Jan 27 05:40:16.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:16.337243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:40:16.340060 kernel: audit: type=1130 audit(1769492416.328:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:16.340211 kernel: audit: type=1131 audit(1769492416.328:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:16.373563 systemd[1]: Reload requested from client PID 2414 ('systemctl') (unit session-10.scope)... Jan 27 05:40:16.373578 systemd[1]: Reloading... Jan 27 05:40:16.492114 zram_generator::config[2470]: No configuration found. Jan 27 05:40:16.674627 systemd[1]: Reloading finished in 299 ms. 
Jan 27 05:40:16.706000 audit: BPF prog-id=63 op=LOAD Jan 27 05:40:16.706000 audit: BPF prog-id=45 op=UNLOAD Jan 27 05:40:16.711446 kernel: audit: type=1334 audit(1769492416.706:296): prog-id=63 op=LOAD Jan 27 05:40:16.712063 kernel: audit: type=1334 audit(1769492416.706:297): prog-id=45 op=UNLOAD Jan 27 05:40:16.706000 audit: BPF prog-id=64 op=LOAD Jan 27 05:40:16.714038 kernel: audit: type=1334 audit(1769492416.706:298): prog-id=64 op=LOAD Jan 27 05:40:16.706000 audit: BPF prog-id=65 op=LOAD Jan 27 05:40:16.716031 kernel: audit: type=1334 audit(1769492416.706:299): prog-id=65 op=LOAD Jan 27 05:40:16.706000 audit: BPF prog-id=46 op=UNLOAD Jan 27 05:40:16.706000 audit: BPF prog-id=47 op=UNLOAD Jan 27 05:40:16.721275 kernel: audit: type=1334 audit(1769492416.706:300): prog-id=46 op=UNLOAD Jan 27 05:40:16.721312 kernel: audit: type=1334 audit(1769492416.706:301): prog-id=47 op=UNLOAD Jan 27 05:40:16.721345 kernel: audit: type=1334 audit(1769492416.707:302): prog-id=66 op=LOAD Jan 27 05:40:16.707000 audit: BPF prog-id=66 op=LOAD Jan 27 05:40:16.724026 kernel: audit: type=1334 audit(1769492416.707:303): prog-id=58 op=UNLOAD Jan 27 05:40:16.724135 kernel: audit: type=1334 audit(1769492416.710:304): prog-id=67 op=LOAD Jan 27 05:40:16.707000 audit: BPF prog-id=58 op=UNLOAD Jan 27 05:40:16.710000 audit: BPF prog-id=67 op=LOAD Jan 27 05:40:16.710000 audit: BPF prog-id=51 op=UNLOAD Jan 27 05:40:16.726796 kernel: audit: type=1334 audit(1769492416.710:305): prog-id=51 op=UNLOAD Jan 27 05:40:16.710000 audit: BPF prog-id=68 op=LOAD Jan 27 05:40:16.711000 audit: BPF prog-id=69 op=LOAD Jan 27 05:40:16.711000 audit: BPF prog-id=52 op=UNLOAD Jan 27 05:40:16.711000 audit: BPF prog-id=53 op=UNLOAD Jan 27 05:40:16.711000 audit: BPF prog-id=70 op=LOAD Jan 27 05:40:16.715000 audit: BPF prog-id=59 op=UNLOAD Jan 27 05:40:16.715000 audit: BPF prog-id=71 op=LOAD Jan 27 05:40:16.715000 audit: BPF prog-id=54 op=UNLOAD Jan 27 05:40:16.717000 audit: BPF prog-id=72 op=LOAD Jan 27 05:40:16.717000 
audit: BPF prog-id=60 op=UNLOAD Jan 27 05:40:16.717000 audit: BPF prog-id=73 op=LOAD Jan 27 05:40:16.717000 audit: BPF prog-id=74 op=LOAD Jan 27 05:40:16.717000 audit: BPF prog-id=61 op=UNLOAD Jan 27 05:40:16.717000 audit: BPF prog-id=62 op=UNLOAD Jan 27 05:40:16.718000 audit: BPF prog-id=75 op=LOAD Jan 27 05:40:16.718000 audit: BPF prog-id=76 op=LOAD Jan 27 05:40:16.718000 audit: BPF prog-id=43 op=UNLOAD Jan 27 05:40:16.718000 audit: BPF prog-id=44 op=UNLOAD Jan 27 05:40:16.719000 audit: BPF prog-id=77 op=LOAD Jan 27 05:40:16.719000 audit: BPF prog-id=55 op=UNLOAD Jan 27 05:40:16.719000 audit: BPF prog-id=78 op=LOAD Jan 27 05:40:16.719000 audit: BPF prog-id=79 op=LOAD Jan 27 05:40:16.719000 audit: BPF prog-id=56 op=UNLOAD Jan 27 05:40:16.719000 audit: BPF prog-id=57 op=UNLOAD Jan 27 05:40:16.719000 audit: BPF prog-id=80 op=LOAD Jan 27 05:40:16.719000 audit: BPF prog-id=48 op=UNLOAD Jan 27 05:40:16.720000 audit: BPF prog-id=81 op=LOAD Jan 27 05:40:16.720000 audit: BPF prog-id=82 op=LOAD Jan 27 05:40:16.720000 audit: BPF prog-id=49 op=UNLOAD Jan 27 05:40:16.720000 audit: BPF prog-id=50 op=UNLOAD Jan 27 05:40:16.739462 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 27 05:40:16.739535 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 27 05:40:16.739771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:16.739820 systemd[1]: kubelet.service: Consumed 98ms CPU time, 98.4M memory peak. Jan 27 05:40:16.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 05:40:16.743237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:40:16.869613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 27 05:40:16.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:16.875467 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 05:40:16.915829 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 05:40:16.916128 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 05:40:16.916230 kubelet[2514]: I0127 05:40:16.916209 2514 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 05:40:17.436759 kubelet[2514]: I0127 05:40:17.436707 2514 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 27 05:40:17.436759 kubelet[2514]: I0127 05:40:17.436739 2514 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 05:40:17.436908 kubelet[2514]: I0127 05:40:17.436794 2514 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 27 05:40:17.436908 kubelet[2514]: I0127 05:40:17.436807 2514 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 27 05:40:17.437188 kubelet[2514]: I0127 05:40:17.437169 2514 server.go:956] "Client rotation is on, will bootstrap in background" Jan 27 05:40:17.444725 kubelet[2514]: E0127 05:40:17.444674 2514 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.2.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 27 05:40:17.446568 kubelet[2514]: I0127 05:40:17.446465 2514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 05:40:17.451993 kubelet[2514]: I0127 05:40:17.451965 2514 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 05:40:17.455921 kubelet[2514]: I0127 05:40:17.455897 2514 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 27 05:40:17.456219 kubelet[2514]: I0127 05:40:17.456163 2514 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 05:40:17.456493 kubelet[2514]: I0127 05:40:17.456207 2514 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4592-0-0-n-5ca0d578df","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 05:40:17.456603 kubelet[2514]: I0127 05:40:17.456498 2514 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 
05:40:17.456603 kubelet[2514]: I0127 05:40:17.456510 2514 container_manager_linux.go:306] "Creating device plugin manager" Jan 27 05:40:17.456644 kubelet[2514]: I0127 05:40:17.456616 2514 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 27 05:40:17.470486 kubelet[2514]: I0127 05:40:17.470444 2514 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:40:17.470695 kubelet[2514]: I0127 05:40:17.470679 2514 kubelet.go:475] "Attempting to sync node with API server" Jan 27 05:40:17.470725 kubelet[2514]: I0127 05:40:17.470700 2514 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 05:40:17.470725 kubelet[2514]: I0127 05:40:17.470722 2514 kubelet.go:387] "Adding apiserver pod source" Jan 27 05:40:17.470763 kubelet[2514]: I0127 05:40:17.470742 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 05:40:17.475829 kubelet[2514]: E0127 05:40:17.475774 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.2.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 27 05:40:17.476189 kubelet[2514]: E0127 05:40:17.476144 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.2.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4592-0-0-n-5ca0d578df&limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 27 05:40:17.476993 kubelet[2514]: I0127 05:40:17.476489 2514 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 05:40:17.477059 kubelet[2514]: I0127 05:40:17.476996 2514 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 27 05:40:17.477059 kubelet[2514]: I0127 05:40:17.477038 2514 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 27 05:40:17.477099 kubelet[2514]: W0127 05:40:17.477089 2514 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 27 05:40:17.481528 kubelet[2514]: I0127 05:40:17.481507 2514 server.go:1262] "Started kubelet" Jan 27 05:40:17.483269 kubelet[2514]: I0127 05:40:17.483130 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 05:40:17.488725 kubelet[2514]: E0127 05:40:17.487116 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.2.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.2.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4592-0-0-n-5ca0d578df.188e7ffa80badfcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4592-0-0-n-5ca0d578df,UID:ci-4592-0-0-n-5ca0d578df,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-5ca0d578df,},FirstTimestamp:2026-01-27 05:40:17.481465804 +0000 UTC m=+0.601295739,LastTimestamp:2026-01-27 05:40:17.481465804 +0000 UTC m=+0.601295739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-5ca0d578df,}" Jan 27 05:40:17.489062 kubelet[2514]: I0127 05:40:17.489039 2514 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 05:40:17.490675 kubelet[2514]: I0127 05:40:17.490654 2514 server.go:310] "Adding debug handlers to kubelet server" Jan 27 05:40:17.490000 
audit[2528]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.490000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdfaf38b00 a2=0 a3=0 items=0 ppid=2514 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 05:40:17.491000 audit[2530]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.491000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc87a06890 a2=0 a3=0 items=0 ppid=2514 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 05:40:17.494126 kubelet[2514]: I0127 05:40:17.494115 2514 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 27 05:40:17.494263 kubelet[2514]: I0127 05:40:17.494236 2514 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 05:40:17.494299 kubelet[2514]: I0127 05:40:17.494282 2514 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 27 05:40:17.494422 kubelet[2514]: I0127 05:40:17.494411 2514 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 05:40:17.494455 kubelet[2514]: E0127 05:40:17.494412 2514 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:17.494647 kubelet[2514]: I0127 05:40:17.494604 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 05:40:17.494000 audit[2533]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.494000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe1f7d8930 a2=0 a3=0 items=0 ppid=2514 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:40:17.497882 kubelet[2514]: I0127 05:40:17.497870 2514 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 05:40:17.497000 audit[2535]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.497000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc4b014d40 a2=0 a3=0 items=0 ppid=2514 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:40:17.499123 kubelet[2514]: E0127 05:40:17.497887 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.2.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-5ca0d578df?timeout=10s\": dial tcp 10.0.2.139:6443: connect: connection refused" interval="200ms" Jan 27 05:40:17.499123 kubelet[2514]: I0127 05:40:17.498129 2514 reconciler.go:29] "Reconciler: start to sync state" Jan 27 05:40:17.499123 kubelet[2514]: E0127 05:40:17.498888 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.2.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 27 05:40:17.499543 kubelet[2514]: I0127 05:40:17.499507 2514 factory.go:223] Registration of the systemd container factory successfully Jan 27 05:40:17.499670 kubelet[2514]: I0127 05:40:17.499659 2514 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 05:40:17.500875 kubelet[2514]: E0127 05:40:17.500863 2514 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 05:40:17.501109 kubelet[2514]: I0127 05:40:17.501101 2514 factory.go:223] Registration of the containerd container factory successfully Jan 27 05:40:17.506000 audit[2538]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.506000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd6cc34cc0 a2=0 a3=0 items=0 ppid=2514 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 27 05:40:17.508094 kubelet[2514]: I0127 05:40:17.508056 2514 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 27 05:40:17.507000 audit[2540]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:17.507000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffff8620940 a2=0 a3=0 items=0 ppid=2514 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 05:40:17.509122 kubelet[2514]: I0127 05:40:17.509107 2514 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 27 05:40:17.509151 kubelet[2514]: I0127 05:40:17.509123 2514 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 27 05:40:17.509151 kubelet[2514]: I0127 05:40:17.509147 2514 kubelet.go:2427] "Starting kubelet main sync loop" Jan 27 05:40:17.509198 kubelet[2514]: E0127 05:40:17.509183 2514 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 05:40:17.509000 audit[2541]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.509000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd0a48bc0 a2=0 a3=0 items=0 ppid=2514 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 05:40:17.510000 audit[2542]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.510000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe27b430a0 a2=0 a3=0 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 05:40:17.511000 audit[2543]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:17.511000 audit[2543]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff73f47e20 a2=0 a3=0 items=0 ppid=2514 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 05:40:17.512000 audit[2544]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:17.512000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7fcc75f0 a2=0 a3=0 items=0 ppid=2514 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 05:40:17.513000 audit[2545]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:17.513000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec9cc0510 a2=0 a3=0 items=0 ppid=2514 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 05:40:17.514000 audit[2546]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:17.514000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffc0bb28190 a2=0 a3=0 items=0 ppid=2514 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:17.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 05:40:17.516032 kubelet[2514]: E0127 05:40:17.515892 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.2.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 27 05:40:17.524284 kubelet[2514]: I0127 05:40:17.524267 2514 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 05:40:17.524284 kubelet[2514]: I0127 05:40:17.524280 2514 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 05:40:17.524429 kubelet[2514]: I0127 05:40:17.524293 2514 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:40:17.526964 kubelet[2514]: I0127 05:40:17.526948 2514 policy_none.go:49] "None policy: Start" Jan 27 05:40:17.527031 kubelet[2514]: I0127 05:40:17.526967 2514 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 27 05:40:17.527031 kubelet[2514]: I0127 05:40:17.526981 2514 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 27 05:40:17.529233 kubelet[2514]: I0127 05:40:17.529223 2514 policy_none.go:47] "Start" Jan 27 05:40:17.533003 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 27 05:40:17.548131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 27 05:40:17.551250 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 27 05:40:17.570914 kubelet[2514]: E0127 05:40:17.570894 2514 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 27 05:40:17.571443 kubelet[2514]: I0127 05:40:17.571433 2514 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 05:40:17.571503 kubelet[2514]: I0127 05:40:17.571445 2514 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 05:40:17.571678 kubelet[2514]: I0127 05:40:17.571635 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 05:40:17.572642 kubelet[2514]: E0127 05:40:17.572595 2514 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 27 05:40:17.572642 kubelet[2514]: E0127 05:40:17.572624 2514 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:17.622956 systemd[1]: Created slice kubepods-burstable-podf985ef540eeec677d82e7db7986d6cf3.slice - libcontainer container kubepods-burstable-podf985ef540eeec677d82e7db7986d6cf3.slice. Jan 27 05:40:17.631924 kubelet[2514]: E0127 05:40:17.631766 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.635733 systemd[1]: Created slice kubepods-burstable-poddb94b485dac73bdbda62de4d33b85915.slice - libcontainer container kubepods-burstable-poddb94b485dac73bdbda62de4d33b85915.slice. 
Jan 27 05:40:17.648421 kubelet[2514]: E0127 05:40:17.648274 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.651038 systemd[1]: Created slice kubepods-burstable-pod631be04dc40bc00d02b763d9e61ebadd.slice - libcontainer container kubepods-burstable-pod631be04dc40bc00d02b763d9e61ebadd.slice. Jan 27 05:40:17.652681 kubelet[2514]: E0127 05:40:17.652666 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.673564 kubelet[2514]: I0127 05:40:17.673547 2514 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.674149 kubelet[2514]: E0127 05:40:17.674126 2514 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.139:6443/api/v1/nodes\": dial tcp 10.0.2.139:6443: connect: connection refused" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.700036 kubelet[2514]: E0127 05:40:17.699035 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-5ca0d578df?timeout=10s\": dial tcp 10.0.2.139:6443: connect: connection refused" interval="400ms" Jan 27 05:40:17.700410 kubelet[2514]: I0127 05:40:17.700286 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/631be04dc40bc00d02b763d9e61ebadd-kubeconfig\") pod \"kube-scheduler-ci-4592-0-0-n-5ca0d578df\" (UID: \"631be04dc40bc00d02b763d9e61ebadd\") " pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.700591 kubelet[2514]: I0127 05:40:17.700526 2514 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-ca-certs\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.700709 kubelet[2514]: I0127 05:40:17.700653 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-k8s-certs\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.700827 kubelet[2514]: I0127 05:40:17.700772 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.701148 kubelet[2514]: I0127 05:40:17.700887 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-ca-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.701148 kubelet[2514]: I0127 05:40:17.700905 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-flexvolume-dir\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " 
pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.701148 kubelet[2514]: I0127 05:40:17.700927 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-k8s-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.701148 kubelet[2514]: I0127 05:40:17.700941 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-kubeconfig\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.701148 kubelet[2514]: I0127 05:40:17.700956 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.877370 kubelet[2514]: I0127 05:40:17.877283 2514 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.877949 kubelet[2514]: E0127 05:40:17.877888 2514 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.139:6443/api/v1/nodes\": dial tcp 10.0.2.139:6443: connect: connection refused" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:17.938343 containerd[1684]: time="2026-01-27T05:40:17.938074701Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4592-0-0-n-5ca0d578df,Uid:f985ef540eeec677d82e7db7986d6cf3,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:17.952921 containerd[1684]: time="2026-01-27T05:40:17.952806011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4592-0-0-n-5ca0d578df,Uid:db94b485dac73bdbda62de4d33b85915,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:17.957827 containerd[1684]: time="2026-01-27T05:40:17.957790056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4592-0-0-n-5ca0d578df,Uid:631be04dc40bc00d02b763d9e61ebadd,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:18.101671 kubelet[2514]: E0127 05:40:18.101602 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-5ca0d578df?timeout=10s\": dial tcp 10.0.2.139:6443: connect: connection refused" interval="800ms" Jan 27 05:40:18.280711 kubelet[2514]: I0127 05:40:18.280654 2514 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:18.281284 kubelet[2514]: E0127 05:40:18.281247 2514 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.139:6443/api/v1/nodes\": dial tcp 10.0.2.139:6443: connect: connection refused" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:18.573354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426760958.mount: Deactivated successfully. 
Jan 27 05:40:18.587863 containerd[1684]: time="2026-01-27T05:40:18.587797786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:40:18.594186 containerd[1684]: time="2026-01-27T05:40:18.594098601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 05:40:18.595326 containerd[1684]: time="2026-01-27T05:40:18.595228824Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:40:18.598762 containerd[1684]: time="2026-01-27T05:40:18.598073179Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:40:18.600843 kubelet[2514]: E0127 05:40:18.600783 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.2.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4592-0-0-n-5ca0d578df&limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 27 05:40:18.601147 containerd[1684]: time="2026-01-27T05:40:18.601099183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:40:18.602523 containerd[1684]: time="2026-01-27T05:40:18.602470171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 05:40:18.605508 containerd[1684]: time="2026-01-27T05:40:18.605462495Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 05:40:18.608022 containerd[1684]: time="2026-01-27T05:40:18.607405147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 05:40:18.609736 containerd[1684]: time="2026-01-27T05:40:18.609372178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 651.056823ms" Jan 27 05:40:18.614111 containerd[1684]: time="2026-01-27T05:40:18.614056563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 649.707286ms" Jan 27 05:40:18.615959 containerd[1684]: time="2026-01-27T05:40:18.615932411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 673.065158ms" Jan 27 05:40:18.655753 containerd[1684]: time="2026-01-27T05:40:18.655710235Z" level=info msg="connecting to shim 72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6" address="unix:///run/containerd/s/00b760e8df235cdc39cef2e39402448d404b0f8d31e49215b471f19842bb666e" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:18.697542 
kubelet[2514]: E0127 05:40:18.696830 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.2.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 27 05:40:18.702403 containerd[1684]: time="2026-01-27T05:40:18.702366792Z" level=info msg="connecting to shim 371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f" address="unix:///run/containerd/s/06347f42e3839c6963c588c07c51c772418a972d3bfdabf3830cc32683dbe4e2" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:18.703481 containerd[1684]: time="2026-01-27T05:40:18.703447579Z" level=info msg="connecting to shim d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845" address="unix:///run/containerd/s/79e2160fa7294fc513102a87e507973b708b698f4b63f66045cac859fbeb4e3c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:18.717315 systemd[1]: Started cri-containerd-72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6.scope - libcontainer container 72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6. Jan 27 05:40:18.723164 kubelet[2514]: E0127 05:40:18.723137 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.2.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 27 05:40:18.736151 systemd[1]: Started cri-containerd-d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845.scope - libcontainer container d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845. 
Jan 27 05:40:18.741915 systemd[1]: Started cri-containerd-371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f.scope - libcontainer container 371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f. Jan 27 05:40:18.745000 audit: BPF prog-id=83 op=LOAD Jan 27 05:40:18.745000 audit: BPF prog-id=84 op=LOAD Jan 27 05:40:18.745000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.745000 audit: BPF prog-id=84 op=UNLOAD Jan 27 05:40:18.745000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.746000 audit: BPF prog-id=85 op=LOAD Jan 27 05:40:18.746000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:40:18.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.746000 audit: BPF prog-id=86 op=LOAD Jan 27 05:40:18.746000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.746000 audit: BPF prog-id=86 op=UNLOAD Jan 27 05:40:18.746000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.746000 audit: BPF prog-id=85 op=UNLOAD Jan 27 05:40:18.746000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.746000 audit: BPF prog-id=87 op=LOAD Jan 27 05:40:18.746000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2560 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643030303566633363336437646439386437653531303330663766 Jan 27 05:40:18.755000 audit: BPF prog-id=88 op=LOAD Jan 27 05:40:18.755000 audit: BPF prog-id=89 op=LOAD Jan 27 05:40:18.755000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.755000 audit: BPF prog-id=89 op=UNLOAD Jan 27 05:40:18.755000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.756000 audit: BPF prog-id=90 op=LOAD Jan 27 05:40:18.756000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.756000 audit: BPF prog-id=91 op=LOAD Jan 27 05:40:18.756000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.756000 audit: BPF prog-id=91 op=UNLOAD Jan 27 05:40:18.756000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2593 
pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.756000 audit: BPF prog-id=90 op=UNLOAD Jan 27 05:40:18.756000 audit[2617]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.757000 audit: BPF prog-id=92 op=LOAD Jan 27 05:40:18.757000 audit[2617]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2593 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336532316634656562303631383831306537393732653337363164 Jan 27 05:40:18.760000 audit: BPF prog-id=93 op=LOAD Jan 27 05:40:18.762000 audit: BPF prog-id=94 op=LOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 audit: BPF prog-id=94 op=UNLOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 audit: BPF prog-id=95 op=LOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 audit: BPF prog-id=96 op=LOAD Jan 
27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 audit: BPF prog-id=96 op=UNLOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 audit: BPF prog-id=95 op=UNLOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.762000 
audit: BPF prog-id=97 op=LOAD Jan 27 05:40:18.762000 audit[2630]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2596 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337316161613931643064363262383965316330333931643262383935 Jan 27 05:40:18.809384 containerd[1684]: time="2026-01-27T05:40:18.809238831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4592-0-0-n-5ca0d578df,Uid:db94b485dac73bdbda62de4d33b85915,Namespace:kube-system,Attempt:0,} returns sandbox id \"72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6\"" Jan 27 05:40:18.816094 containerd[1684]: time="2026-01-27T05:40:18.815998663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4592-0-0-n-5ca0d578df,Uid:631be04dc40bc00d02b763d9e61ebadd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845\"" Jan 27 05:40:18.819303 containerd[1684]: time="2026-01-27T05:40:18.819282713Z" level=info msg="CreateContainer within sandbox \"72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 27 05:40:18.822981 containerd[1684]: time="2026-01-27T05:40:18.822962094Z" level=info msg="CreateContainer within sandbox \"d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 27 05:40:18.836181 containerd[1684]: time="2026-01-27T05:40:18.836083183Z" level=info msg="Container 
428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:18.838984 containerd[1684]: time="2026-01-27T05:40:18.838907133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4592-0-0-n-5ca0d578df,Uid:f985ef540eeec677d82e7db7986d6cf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f\"" Jan 27 05:40:18.843938 containerd[1684]: time="2026-01-27T05:40:18.843900000Z" level=info msg="CreateContainer within sandbox \"371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 27 05:40:18.844776 containerd[1684]: time="2026-01-27T05:40:18.844663383Z" level=info msg="Container 8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:18.851736 containerd[1684]: time="2026-01-27T05:40:18.851704928Z" level=info msg="CreateContainer within sandbox \"72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc\"" Jan 27 05:40:18.852343 containerd[1684]: time="2026-01-27T05:40:18.852314714Z" level=info msg="StartContainer for \"428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc\"" Jan 27 05:40:18.854422 containerd[1684]: time="2026-01-27T05:40:18.854386293Z" level=info msg="connecting to shim 428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc" address="unix:///run/containerd/s/00b760e8df235cdc39cef2e39402448d404b0f8d31e49215b471f19842bb666e" protocol=ttrpc version=3 Jan 27 05:40:18.869031 containerd[1684]: time="2026-01-27T05:40:18.868770079Z" level=info msg="CreateContainer within sandbox \"d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad\"" Jan 27 05:40:18.870256 containerd[1684]: time="2026-01-27T05:40:18.869649215Z" level=info msg="StartContainer for \"8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad\"" Jan 27 05:40:18.870256 containerd[1684]: time="2026-01-27T05:40:18.869650791Z" level=info msg="Container d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:18.870821 containerd[1684]: time="2026-01-27T05:40:18.870802154Z" level=info msg="connecting to shim 8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad" address="unix:///run/containerd/s/79e2160fa7294fc513102a87e507973b708b698f4b63f66045cac859fbeb4e3c" protocol=ttrpc version=3 Jan 27 05:40:18.871191 systemd[1]: Started cri-containerd-428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc.scope - libcontainer container 428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc. 
Jan 27 05:40:18.887979 kubelet[2514]: E0127 05:40:18.887950 2514 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.2.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.2.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 27 05:40:18.890741 containerd[1684]: time="2026-01-27T05:40:18.890713663Z" level=info msg="CreateContainer within sandbox \"371aaa91d0d62b89e1c0391d2b895e61013ffba04b3b5a09f45ca845695d2f5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d\"" Jan 27 05:40:18.891471 containerd[1684]: time="2026-01-27T05:40:18.891454814Z" level=info msg="StartContainer for \"d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d\"" Jan 27 05:40:18.892847 containerd[1684]: time="2026-01-27T05:40:18.892528659Z" level=info msg="connecting to shim d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d" address="unix:///run/containerd/s/06347f42e3839c6963c588c07c51c772418a972d3bfdabf3830cc32683dbe4e2" protocol=ttrpc version=3 Jan 27 05:40:18.893198 systemd[1]: Started cri-containerd-8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad.scope - libcontainer container 8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad. 
Jan 27 05:40:18.894000 audit: BPF prog-id=98 op=LOAD Jan 27 05:40:18.895000 audit: BPF prog-id=99 op=LOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=99 op=UNLOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=100 op=LOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=101 op=LOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=101 op=UNLOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=100 op=UNLOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.895000 audit: BPF prog-id=102 op=LOAD Jan 27 05:40:18.895000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2560 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432383930386139336463336565323236353737643866666531663661 Jan 27 05:40:18.902380 kubelet[2514]: E0127 05:40:18.902342 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-5ca0d578df?timeout=10s\": dial tcp 10.0.2.139:6443: connect: connection refused" interval="1.6s" Jan 27 05:40:18.915173 systemd[1]: Started cri-containerd-d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d.scope - libcontainer container d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d. 
Jan 27 05:40:18.916000 audit: BPF prog-id=103 op=LOAD Jan 27 05:40:18.917000 audit: BPF prog-id=104 op=LOAD Jan 27 05:40:18.917000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.917000 audit: BPF prog-id=104 op=UNLOAD Jan 27 05:40:18.917000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.917000 audit: BPF prog-id=105 op=LOAD Jan 27 05:40:18.917000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.917000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.917000 audit: BPF prog-id=106 op=LOAD Jan 27 05:40:18.917000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.917000 audit: BPF prog-id=106 op=UNLOAD Jan 27 05:40:18.917000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.920000 audit: BPF prog-id=105 op=UNLOAD Jan 27 05:40:18.920000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:18.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.920000 audit: BPF prog-id=107 op=LOAD Jan 27 05:40:18.920000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2593 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863326666663030366236393766326164376334313666393666666230 Jan 27 05:40:18.927000 audit: BPF prog-id=108 op=LOAD Jan 27 05:40:18.927000 audit: BPF prog-id=109 op=LOAD Jan 27 05:40:18.927000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230238 a2=98 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.927000 audit: BPF prog-id=109 op=UNLOAD Jan 27 05:40:18.927000 audit[2724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.928000 audit: BPF prog-id=110 op=LOAD Jan 27 05:40:18.928000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000230488 a2=98 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.928000 audit: BPF prog-id=111 op=LOAD Jan 27 05:40:18.928000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000230218 a2=98 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.928000 audit: BPF prog-id=111 op=UNLOAD Jan 27 05:40:18.928000 audit[2724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.928000 audit: BPF prog-id=110 op=UNLOAD Jan 27 05:40:18.928000 audit[2724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.928000 audit: BPF prog-id=112 op=LOAD Jan 27 05:40:18.928000 audit[2724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002306e8 a2=98 a3=0 items=0 ppid=2596 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:18.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432356364346364386533636335346363383162643662623935333764 Jan 27 05:40:18.963946 containerd[1684]: time="2026-01-27T05:40:18.963838279Z" level=info msg="StartContainer for \"428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc\" returns 
successfully" Jan 27 05:40:18.998556 containerd[1684]: time="2026-01-27T05:40:18.998473100Z" level=info msg="StartContainer for \"8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad\" returns successfully" Jan 27 05:40:19.000030 containerd[1684]: time="2026-01-27T05:40:18.999148866Z" level=info msg="StartContainer for \"d25cd4cd8e3cc54cc81bd6bb9537d3227ba8ced597a0b6efd1446229316f100d\" returns successfully" Jan 27 05:40:19.082759 kubelet[2514]: I0127 05:40:19.082739 2514 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:19.530329 kubelet[2514]: E0127 05:40:19.530235 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:19.536712 kubelet[2514]: E0127 05:40:19.536604 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:19.543292 kubelet[2514]: E0127 05:40:19.543275 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:20.530604 kubelet[2514]: E0127 05:40:20.530548 2514 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:20.542863 kubelet[2514]: E0127 05:40:20.542737 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:20.544157 kubelet[2514]: E0127 05:40:20.542798 2514 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4592-0-0-n-5ca0d578df\" not 
found" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:20.566458 kubelet[2514]: E0127 05:40:20.566268 2514 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4592-0-0-n-5ca0d578df.188e7ffa80badfcc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4592-0-0-n-5ca0d578df,UID:ci-4592-0-0-n-5ca0d578df,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-5ca0d578df,},FirstTimestamp:2026-01-27 05:40:17.481465804 +0000 UTC m=+0.601295739,LastTimestamp:2026-01-27 05:40:17.481465804 +0000 UTC m=+0.601295739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-5ca0d578df,}" Jan 27 05:40:20.620159 kubelet[2514]: E0127 05:40:20.619942 2514 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4592-0-0-n-5ca0d578df.188e7ffa81e29f0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4592-0-0-n-5ca0d578df,UID:ci-4592-0-0-n-5ca0d578df,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-5ca0d578df,},FirstTimestamp:2026-01-27 05:40:17.500847885 +0000 UTC m=+0.620677837,LastTimestamp:2026-01-27 05:40:17.500847885 +0000 UTC m=+0.620677837,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-5ca0d578df,}" Jan 27 05:40:20.638313 kubelet[2514]: I0127 05:40:20.638188 2514 kubelet_node_status.go:78] "Successfully registered node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:20.638313 kubelet[2514]: E0127 
05:40:20.638215 2514 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4592-0-0-n-5ca0d578df\": node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:20.680267 kubelet[2514]: E0127 05:40:20.680233 2514 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:20.686932 kubelet[2514]: E0127 05:40:20.686729 2514 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4592-0-0-n-5ca0d578df.188e7ffa8335e283 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4592-0-0-n-5ca0d578df,UID:ci-4592-0-0-n-5ca0d578df,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4592-0-0-n-5ca0d578df status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4592-0-0-n-5ca0d578df,},FirstTimestamp:2026-01-27 05:40:17.523081859 +0000 UTC m=+0.642911807,LastTimestamp:2026-01-27 05:40:17.523081859 +0000 UTC m=+0.642911807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4592-0-0-n-5ca0d578df,}" Jan 27 05:40:20.780410 kubelet[2514]: E0127 05:40:20.780368 2514 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:20.881346 kubelet[2514]: E0127 05:40:20.881247 2514 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:20.982033 kubelet[2514]: E0127 05:40:20.981913 2514 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:21.096081 kubelet[2514]: I0127 05:40:21.095702 2514 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.101352 kubelet[2514]: E0127 05:40:21.101304 2514 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.102203 kubelet[2514]: I0127 05:40:21.101523 2514 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.103776 kubelet[2514]: E0127 05:40:21.103752 2514 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.104047 kubelet[2514]: I0127 05:40:21.103881 2514 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.105938 kubelet[2514]: E0127 05:40:21.105920 2514 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4592-0-0-n-5ca0d578df\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:21.478405 kubelet[2514]: I0127 05:40:21.478353 2514 apiserver.go:52] "Watching apiserver" Jan 27 05:40:21.499317 kubelet[2514]: I0127 05:40:21.499128 2514 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 05:40:22.361221 systemd[1]: Reload requested from client PID 2796 ('systemctl') (unit session-10.scope)... Jan 27 05:40:22.361238 systemd[1]: Reloading... Jan 27 05:40:22.456067 zram_generator::config[2842]: No configuration found. Jan 27 05:40:22.659685 systemd[1]: Reloading finished in 298 ms. 
Jan 27 05:40:22.681988 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 05:40:22.694851 systemd[1]: kubelet.service: Deactivated successfully. Jan 27 05:40:22.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:22.695257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:22.696272 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 27 05:40:22.696306 kernel: audit: type=1131 audit(1769492422.694:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:22.698861 systemd[1]: kubelet.service: Consumed 901ms CPU time, 124M memory peak. Jan 27 05:40:22.700520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 27 05:40:22.702000 audit: BPF prog-id=113 op=LOAD Jan 27 05:40:22.705466 kernel: audit: type=1334 audit(1769492422.702:399): prog-id=113 op=LOAD Jan 27 05:40:22.705514 kernel: audit: type=1334 audit(1769492422.702:400): prog-id=70 op=UNLOAD Jan 27 05:40:22.702000 audit: BPF prog-id=70 op=UNLOAD Jan 27 05:40:22.706598 kernel: audit: type=1334 audit(1769492422.702:401): prog-id=114 op=LOAD Jan 27 05:40:22.702000 audit: BPF prog-id=114 op=LOAD Jan 27 05:40:22.709696 kernel: audit: type=1334 audit(1769492422.702:402): prog-id=63 op=UNLOAD Jan 27 05:40:22.702000 audit: BPF prog-id=63 op=UNLOAD Jan 27 05:40:22.702000 audit: BPF prog-id=115 op=LOAD Jan 27 05:40:22.702000 audit: BPF prog-id=116 op=LOAD Jan 27 05:40:22.702000 audit: BPF prog-id=64 op=UNLOAD Jan 27 05:40:22.702000 audit: BPF prog-id=65 op=UNLOAD Jan 27 05:40:22.705000 audit: BPF prog-id=117 op=LOAD Jan 27 05:40:22.705000 audit: BPF prog-id=66 op=UNLOAD Jan 27 05:40:22.706000 audit: BPF prog-id=118 op=LOAD Jan 27 05:40:22.706000 audit: BPF prog-id=80 op=UNLOAD Jan 27 05:40:22.706000 audit: BPF prog-id=119 op=LOAD Jan 27 05:40:22.710039 kernel: audit: type=1334 audit(1769492422.702:403): prog-id=115 op=LOAD Jan 27 05:40:22.710053 kernel: audit: type=1334 audit(1769492422.702:404): prog-id=116 op=LOAD Jan 27 05:40:22.710069 kernel: audit: type=1334 audit(1769492422.702:405): prog-id=64 op=UNLOAD Jan 27 05:40:22.710083 kernel: audit: type=1334 audit(1769492422.702:406): prog-id=65 op=UNLOAD Jan 27 05:40:22.710098 kernel: audit: type=1334 audit(1769492422.705:407): prog-id=117 op=LOAD Jan 27 05:40:22.706000 audit: BPF prog-id=120 op=LOAD Jan 27 05:40:22.706000 audit: BPF prog-id=81 op=UNLOAD Jan 27 05:40:22.706000 audit: BPF prog-id=82 op=UNLOAD Jan 27 05:40:22.706000 audit: BPF prog-id=121 op=LOAD Jan 27 05:40:22.707000 audit: BPF prog-id=122 op=LOAD Jan 27 05:40:22.707000 audit: BPF prog-id=75 op=UNLOAD Jan 27 05:40:22.707000 audit: BPF prog-id=76 op=UNLOAD Jan 27 05:40:22.708000 audit: BPF prog-id=123 
op=LOAD Jan 27 05:40:22.708000 audit: BPF prog-id=72 op=UNLOAD Jan 27 05:40:22.708000 audit: BPF prog-id=124 op=LOAD Jan 27 05:40:22.708000 audit: BPF prog-id=125 op=LOAD Jan 27 05:40:22.708000 audit: BPF prog-id=73 op=UNLOAD Jan 27 05:40:22.708000 audit: BPF prog-id=74 op=UNLOAD Jan 27 05:40:22.708000 audit: BPF prog-id=126 op=LOAD Jan 27 05:40:22.708000 audit: BPF prog-id=71 op=UNLOAD Jan 27 05:40:22.710000 audit: BPF prog-id=127 op=LOAD Jan 27 05:40:22.710000 audit: BPF prog-id=67 op=UNLOAD Jan 27 05:40:22.710000 audit: BPF prog-id=128 op=LOAD Jan 27 05:40:22.710000 audit: BPF prog-id=129 op=LOAD Jan 27 05:40:22.710000 audit: BPF prog-id=68 op=UNLOAD Jan 27 05:40:22.710000 audit: BPF prog-id=69 op=UNLOAD Jan 27 05:40:22.711000 audit: BPF prog-id=130 op=LOAD Jan 27 05:40:22.711000 audit: BPF prog-id=77 op=UNLOAD Jan 27 05:40:22.711000 audit: BPF prog-id=131 op=LOAD Jan 27 05:40:22.711000 audit: BPF prog-id=132 op=LOAD Jan 27 05:40:22.711000 audit: BPF prog-id=78 op=UNLOAD Jan 27 05:40:22.711000 audit: BPF prog-id=79 op=UNLOAD Jan 27 05:40:22.854212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 05:40:22.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:22.864344 (kubelet)[2893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 05:40:22.918211 kubelet[2893]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 05:40:22.918211 kubelet[2893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 05:40:22.918211 kubelet[2893]: I0127 05:40:22.917906 2893 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 05:40:22.925733 kubelet[2893]: I0127 05:40:22.925705 2893 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 27 05:40:22.925733 kubelet[2893]: I0127 05:40:22.925725 2893 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 05:40:22.925900 kubelet[2893]: I0127 05:40:22.925748 2893 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 27 05:40:22.925900 kubelet[2893]: I0127 05:40:22.925755 2893 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 05:40:22.926210 kubelet[2893]: I0127 05:40:22.926064 2893 server.go:956] "Client rotation is on, will bootstrap in background" Jan 27 05:40:22.928197 kubelet[2893]: I0127 05:40:22.928149 2893 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 27 05:40:22.930669 kubelet[2893]: I0127 05:40:22.930567 2893 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 05:40:22.937301 kubelet[2893]: I0127 05:40:22.937271 2893 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 05:40:22.940784 kubelet[2893]: I0127 05:40:22.940761 2893 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 27 05:40:22.941951 kubelet[2893]: I0127 05:40:22.941422 2893 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 05:40:22.941951 kubelet[2893]: I0127 05:40:22.941451 2893 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4592-0-0-n-5ca0d578df","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 05:40:22.941951 kubelet[2893]: I0127 05:40:22.941588 2893 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 
05:40:22.941951 kubelet[2893]: I0127 05:40:22.941597 2893 container_manager_linux.go:306] "Creating device plugin manager" Jan 27 05:40:22.942174 kubelet[2893]: I0127 05:40:22.941616 2893 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 27 05:40:22.946260 kubelet[2893]: I0127 05:40:22.946240 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:40:22.946484 kubelet[2893]: I0127 05:40:22.946476 2893 kubelet.go:475] "Attempting to sync node with API server" Jan 27 05:40:22.946533 kubelet[2893]: I0127 05:40:22.946528 2893 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 05:40:22.946579 kubelet[2893]: I0127 05:40:22.946575 2893 kubelet.go:387] "Adding apiserver pod source" Jan 27 05:40:22.948149 kubelet[2893]: I0127 05:40:22.948137 2893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 05:40:22.949109 kubelet[2893]: I0127 05:40:22.949097 2893 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 05:40:22.949599 kubelet[2893]: I0127 05:40:22.949589 2893 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 27 05:40:22.949663 kubelet[2893]: I0127 05:40:22.949658 2893 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 27 05:40:22.952186 kubelet[2893]: I0127 05:40:22.952176 2893 server.go:1262] "Started kubelet" Jan 27 05:40:22.953586 kubelet[2893]: I0127 05:40:22.953572 2893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 05:40:22.960683 kubelet[2893]: I0127 05:40:22.960654 2893 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 05:40:22.961575 kubelet[2893]: I0127 05:40:22.961561 2893 server.go:310] "Adding debug handlers to 
kubelet server" Jan 27 05:40:22.964176 kubelet[2893]: I0127 05:40:22.964147 2893 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 05:40:22.964271 kubelet[2893]: I0127 05:40:22.964263 2893 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 27 05:40:22.964418 kubelet[2893]: I0127 05:40:22.964409 2893 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 05:40:22.964687 kubelet[2893]: I0127 05:40:22.964676 2893 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 05:40:22.967249 kubelet[2893]: I0127 05:40:22.967239 2893 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 27 05:40:22.967459 kubelet[2893]: E0127 05:40:22.967448 2893 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4592-0-0-n-5ca0d578df\" not found" Jan 27 05:40:22.969311 kubelet[2893]: I0127 05:40:22.969299 2893 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 05:40:22.969462 kubelet[2893]: I0127 05:40:22.969455 2893 reconciler.go:29] "Reconciler: start to sync state" Jan 27 05:40:22.971746 kubelet[2893]: I0127 05:40:22.971723 2893 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 27 05:40:22.972675 kubelet[2893]: I0127 05:40:22.972663 2893 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 27 05:40:22.972737 kubelet[2893]: I0127 05:40:22.972731 2893 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 27 05:40:22.972790 kubelet[2893]: I0127 05:40:22.972785 2893 kubelet.go:2427] "Starting kubelet main sync loop" Jan 27 05:40:22.972861 kubelet[2893]: E0127 05:40:22.972851 2893 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 05:40:22.976350 kubelet[2893]: I0127 05:40:22.976333 2893 factory.go:223] Registration of the systemd container factory successfully Jan 27 05:40:22.976426 kubelet[2893]: I0127 05:40:22.976411 2893 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 05:40:22.979206 kubelet[2893]: I0127 05:40:22.979189 2893 factory.go:223] Registration of the containerd container factory successfully Jan 27 05:40:22.984766 kubelet[2893]: E0127 05:40:22.984742 2893 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 05:40:23.014860 kubelet[2893]: I0127 05:40:23.014834 2893 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 05:40:23.014860 kubelet[2893]: I0127 05:40:23.014848 2893 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 05:40:23.014860 kubelet[2893]: I0127 05:40:23.014865 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 27 05:40:23.015021 kubelet[2893]: I0127 05:40:23.014974 2893 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 27 05:40:23.015021 kubelet[2893]: I0127 05:40:23.014982 2893 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 27 05:40:23.015021 kubelet[2893]: I0127 05:40:23.014998 2893 policy_none.go:49] "None policy: Start" Jan 27 05:40:23.015241 kubelet[2893]: I0127 05:40:23.015222 2893 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 27 05:40:23.015241 kubelet[2893]: I0127 05:40:23.015238 2893 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 27 05:40:23.015366 kubelet[2893]: I0127 05:40:23.015350 2893 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 27 05:40:23.015366 kubelet[2893]: I0127 05:40:23.015360 2893 policy_none.go:47] "Start" Jan 27 05:40:23.019337 kubelet[2893]: E0127 05:40:23.019313 2893 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 27 05:40:23.019453 kubelet[2893]: I0127 05:40:23.019443 2893 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 05:40:23.019480 kubelet[2893]: I0127 05:40:23.019455 2893 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 05:40:23.020441 kubelet[2893]: I0127 05:40:23.020344 2893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 05:40:23.021996 kubelet[2893]: 
E0127 05:40:23.021980 2893 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 27 05:40:23.075061 kubelet[2893]: I0127 05:40:23.074302 2893 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.075061 kubelet[2893]: I0127 05:40:23.074355 2893 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.075061 kubelet[2893]: I0127 05:40:23.074528 2893 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.121403 kubelet[2893]: I0127 05:40:23.121382 2893 kubelet_node_status.go:75] "Attempting to register node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.129530 kubelet[2893]: I0127 05:40:23.129514 2893 kubelet_node_status.go:124] "Node was previously registered" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.129895 kubelet[2893]: I0127 05:40:23.129614 2893 kubelet_node_status.go:78] "Successfully registered node" node="ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.180366 update_engine[1651]: I20260127 05:40:23.180219 1651 update_attempter.cc:509] Updating boot flags... 
Jan 27 05:40:23.271052 kubelet[2893]: I0127 05:40:23.269978 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-ca-certs\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.271052 kubelet[2893]: I0127 05:40:23.270040 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-k8s-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.271052 kubelet[2893]: I0127 05:40:23.270061 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-kubeconfig\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272126 kubelet[2893]: I0127 05:40:23.272102 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272457 kubelet[2893]: I0127 05:40:23.272331 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/631be04dc40bc00d02b763d9e61ebadd-kubeconfig\") pod 
\"kube-scheduler-ci-4592-0-0-n-5ca0d578df\" (UID: \"631be04dc40bc00d02b763d9e61ebadd\") " pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272457 kubelet[2893]: I0127 05:40:23.272353 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-k8s-certs\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272457 kubelet[2893]: I0127 05:40:23.272368 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f985ef540eeec677d82e7db7986d6cf3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" (UID: \"f985ef540eeec677d82e7db7986d6cf3\") " pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272457 kubelet[2893]: I0127 05:40:23.272384 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-ca-certs\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.272457 kubelet[2893]: I0127 05:40:23.272406 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db94b485dac73bdbda62de4d33b85915-flexvolume-dir\") pod \"kube-controller-manager-ci-4592-0-0-n-5ca0d578df\" (UID: \"db94b485dac73bdbda62de4d33b85915\") " pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:23.949267 kubelet[2893]: I0127 05:40:23.949202 2893 apiserver.go:52] "Watching apiserver" Jan 27 05:40:23.970170 
kubelet[2893]: I0127 05:40:23.970116 2893 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 05:40:24.004196 kubelet[2893]: I0127 05:40:24.004171 2893 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:24.010659 kubelet[2893]: E0127 05:40:24.010582 2893 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4592-0-0-n-5ca0d578df\" already exists" pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" Jan 27 05:40:24.032243 kubelet[2893]: I0127 05:40:24.031692 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4592-0-0-n-5ca0d578df" podStartSLOduration=1.031677714 podStartE2EDuration="1.031677714s" podCreationTimestamp="2026-01-27 05:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:24.024512592 +0000 UTC m=+1.154420253" watchObservedRunningTime="2026-01-27 05:40:24.031677714 +0000 UTC m=+1.161585401" Jan 27 05:40:24.032243 kubelet[2893]: I0127 05:40:24.031786 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4592-0-0-n-5ca0d578df" podStartSLOduration=1.031782414 podStartE2EDuration="1.031782414s" podCreationTimestamp="2026-01-27 05:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:24.031610978 +0000 UTC m=+1.161518643" watchObservedRunningTime="2026-01-27 05:40:24.031782414 +0000 UTC m=+1.161690071" Jan 27 05:40:24.039101 kubelet[2893]: I0127 05:40:24.039060 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4592-0-0-n-5ca0d578df" podStartSLOduration=1.039046697 podStartE2EDuration="1.039046697s" 
podCreationTimestamp="2026-01-27 05:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:24.038512508 +0000 UTC m=+1.168420173" watchObservedRunningTime="2026-01-27 05:40:24.039046697 +0000 UTC m=+1.168954361" Jan 27 05:40:29.424517 kubelet[2893]: I0127 05:40:29.424492 2893 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 27 05:40:29.425367 kubelet[2893]: I0127 05:40:29.425116 2893 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 27 05:40:29.425544 containerd[1684]: time="2026-01-27T05:40:29.424908888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 27 05:40:30.258512 systemd[1]: Created slice kubepods-besteffort-pod84aa755d_9e6f_4c04_9f77_f36d43fd85ab.slice - libcontainer container kubepods-besteffort-pod84aa755d_9e6f_4c04_9f77_f36d43fd85ab.slice. 
Jan 27 05:40:30.314103 kubelet[2893]: I0127 05:40:30.314038 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84aa755d-9e6f-4c04-9f77-f36d43fd85ab-xtables-lock\") pod \"kube-proxy-mw57j\" (UID: \"84aa755d-9e6f-4c04-9f77-f36d43fd85ab\") " pod="kube-system/kube-proxy-mw57j" Jan 27 05:40:30.314103 kubelet[2893]: I0127 05:40:30.314084 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84aa755d-9e6f-4c04-9f77-f36d43fd85ab-kube-proxy\") pod \"kube-proxy-mw57j\" (UID: \"84aa755d-9e6f-4c04-9f77-f36d43fd85ab\") " pod="kube-system/kube-proxy-mw57j" Jan 27 05:40:30.314103 kubelet[2893]: I0127 05:40:30.314107 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84aa755d-9e6f-4c04-9f77-f36d43fd85ab-lib-modules\") pod \"kube-proxy-mw57j\" (UID: \"84aa755d-9e6f-4c04-9f77-f36d43fd85ab\") " pod="kube-system/kube-proxy-mw57j" Jan 27 05:40:30.314303 kubelet[2893]: I0127 05:40:30.314124 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfdm\" (UniqueName: \"kubernetes.io/projected/84aa755d-9e6f-4c04-9f77-f36d43fd85ab-kube-api-access-cxfdm\") pod \"kube-proxy-mw57j\" (UID: \"84aa755d-9e6f-4c04-9f77-f36d43fd85ab\") " pod="kube-system/kube-proxy-mw57j" Jan 27 05:40:30.570527 containerd[1684]: time="2026-01-27T05:40:30.570428981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mw57j,Uid:84aa755d-9e6f-4c04-9f77-f36d43fd85ab,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:30.604107 containerd[1684]: time="2026-01-27T05:40:30.604059752Z" level=info msg="connecting to shim db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882" 
address="unix:///run/containerd/s/eb135d8f849e28ad0c0f347cdaf9e84017eec6ce060aa0acfcdacab5ccec2959" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:30.630205 systemd[1]: Started cri-containerd-db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882.scope - libcontainer container db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882. Jan 27 05:40:30.667000 audit: BPF prog-id=133 op=LOAD Jan 27 05:40:30.669374 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 27 05:40:30.669473 kernel: audit: type=1334 audit(1769492430.667:440): prog-id=133 op=LOAD Jan 27 05:40:30.670000 audit: BPF prog-id=134 op=LOAD Jan 27 05:40:30.673075 kernel: audit: type=1334 audit(1769492430.670:441): prog-id=134 op=LOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.678057 kernel: audit: type=1300 audit(1769492430.670:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.684038 kernel: audit: type=1327 audit(1769492430.670:441): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.684980 kernel: audit: type=1334 audit(1769492430.670:442): prog-id=134 op=UNLOAD Jan 27 05:40:30.670000 audit: BPF prog-id=134 op=UNLOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.692611 kernel: audit: type=1300 audit(1769492430.670:442): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.692667 kernel: audit: type=1327 audit(1769492430.670:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: BPF prog-id=135 op=LOAD Jan 27 05:40:30.700038 kernel: audit: type=1334 audit(1769492430.670:443): prog-id=135 op=LOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 
ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.705033 kernel: audit: type=1300 audit(1769492430.670:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.705094 kernel: audit: type=1327 audit(1769492430.670:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: BPF prog-id=136 op=LOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: BPF prog-id=136 op=UNLOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: BPF prog-id=135 op=UNLOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.670000 audit: BPF prog-id=137 op=LOAD Jan 27 05:40:30.670000 audit[2979]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2968 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462323534373236643464623030363236373431326666363932646164 Jan 27 05:40:30.710461 systemd[1]: Created slice 
kubepods-besteffort-pod47d4fcb8_186b_48a3_bb46_d6924eacc375.slice - libcontainer container kubepods-besteffort-pod47d4fcb8_186b_48a3_bb46_d6924eacc375.slice. Jan 27 05:40:30.716523 containerd[1684]: time="2026-01-27T05:40:30.716482717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mw57j,Uid:84aa755d-9e6f-4c04-9f77-f36d43fd85ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882\"" Jan 27 05:40:30.716845 kubelet[2893]: I0127 05:40:30.716825 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47d4fcb8-186b-48a3-bb46-d6924eacc375-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-m5p7d\" (UID: \"47d4fcb8-186b-48a3-bb46-d6924eacc375\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m5p7d" Jan 27 05:40:30.717093 kubelet[2893]: I0127 05:40:30.716856 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zr5v\" (UniqueName: \"kubernetes.io/projected/47d4fcb8-186b-48a3-bb46-d6924eacc375-kube-api-access-2zr5v\") pod \"tigera-operator-65cdcdfd6d-m5p7d\" (UID: \"47d4fcb8-186b-48a3-bb46-d6924eacc375\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m5p7d" Jan 27 05:40:30.722575 containerd[1684]: time="2026-01-27T05:40:30.722534755Z" level=info msg="CreateContainer within sandbox \"db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 27 05:40:30.739506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959177118.mount: Deactivated successfully. 
Jan 27 05:40:30.741190 containerd[1684]: time="2026-01-27T05:40:30.740006050Z" level=info msg="Container 73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:30.750191 containerd[1684]: time="2026-01-27T05:40:30.750157459Z" level=info msg="CreateContainer within sandbox \"db254726d4db006267412ff692dad6700ddfbacea37b5d04e3db678799022882\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a\"" Jan 27 05:40:30.750891 containerd[1684]: time="2026-01-27T05:40:30.750735437Z" level=info msg="StartContainer for \"73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a\"" Jan 27 05:40:30.751983 containerd[1684]: time="2026-01-27T05:40:30.751963719Z" level=info msg="connecting to shim 73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a" address="unix:///run/containerd/s/eb135d8f849e28ad0c0f347cdaf9e84017eec6ce060aa0acfcdacab5ccec2959" protocol=ttrpc version=3 Jan 27 05:40:30.777220 systemd[1]: Started cri-containerd-73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a.scope - libcontainer container 73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a. 
Jan 27 05:40:30.826000 audit: BPF prog-id=138 op=LOAD Jan 27 05:40:30.826000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2968 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303236616365326262653965643637343132633235626161316262 Jan 27 05:40:30.826000 audit: BPF prog-id=139 op=LOAD Jan 27 05:40:30.826000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2968 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303236616365326262653965643637343132633235626161316262 Jan 27 05:40:30.826000 audit: BPF prog-id=139 op=UNLOAD Jan 27 05:40:30.826000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.826000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303236616365326262653965643637343132633235626161316262 Jan 27 05:40:30.826000 audit: BPF prog-id=138 op=UNLOAD Jan 27 05:40:30.826000 audit[3004]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2968 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303236616365326262653965643637343132633235626161316262 Jan 27 05:40:30.826000 audit: BPF prog-id=140 op=LOAD Jan 27 05:40:30.826000 audit[3004]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2968 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:30.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303236616365326262653965643637343132633235626161316262 Jan 27 05:40:30.847292 containerd[1684]: time="2026-01-27T05:40:30.847255707Z" level=info msg="StartContainer for \"73026ace2bbe9ed67412c25baa1bb920f3379d17cc2465ac731408f4d1afc20a\" returns successfully" Jan 27 05:40:31.016701 containerd[1684]: time="2026-01-27T05:40:31.016673433Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m5p7d,Uid:47d4fcb8-186b-48a3-bb46-d6924eacc375,Namespace:tigera-operator,Attempt:0,}" Jan 27 05:40:31.048000 audit[3073]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.048000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe51519150 a2=0 a3=7ffe5151913c items=0 ppid=3018 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 05:40:31.051000 audit[3074]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.051000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff239798d0 a2=0 a3=7fff239798bc items=0 ppid=3018 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 05:40:31.054000 audit[3080]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.054000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff96ad2510 a2=0 a3=7fff96ad24fc items=0 ppid=3018 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:31.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 05:40:31.056000 audit[3082]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.056000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2530b300 a2=0 a3=7ffe2530b2ec items=0 ppid=3018 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 05:40:31.060156 containerd[1684]: time="2026-01-27T05:40:31.060084467Z" level=info msg="connecting to shim 399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4" address="unix:///run/containerd/s/16665a1858466eb826a91a7cbf7835abde715d8b4299b83d5b902434148520cd" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:31.060000 audit[3087]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.060000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5abbe6d0 a2=0 a3=7fff5abbe6bc items=0 ppid=3018 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.060000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 05:40:31.063000 audit[3088]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.063000 audit[3088]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffec8653b90 a2=0 a3=7ffec8653b7c items=0 ppid=3018 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 05:40:31.090231 systemd[1]: Started cri-containerd-399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4.scope - libcontainer container 399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4. Jan 27 05:40:31.099000 audit: BPF prog-id=141 op=LOAD Jan 27 05:40:31.099000 audit: BPF prog-id=142 op=LOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=142 op=UNLOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=143 op=LOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=144 op=LOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=144 op=UNLOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=143 op=UNLOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.099000 audit: BPF prog-id=145 op=LOAD Jan 27 05:40:31.099000 audit[3099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3086 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396535333330666562626134386162663131383730373431396239 Jan 27 05:40:31.134005 containerd[1684]: time="2026-01-27T05:40:31.133927827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m5p7d,Uid:47d4fcb8-186b-48a3-bb46-d6924eacc375,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4\"" Jan 27 05:40:31.136198 containerd[1684]: time="2026-01-27T05:40:31.136176350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 27 05:40:31.155000 audit[3123]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.155000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc6c436d20 a2=0 a3=7ffc6c436d0c items=0 ppid=3018 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 05:40:31.157000 audit[3125]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.157000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc158532f0 a2=0 a3=7ffc158532dc items=0 ppid=3018 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 27 05:40:31.161000 audit[3128]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.161000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd14914d0 a2=0 
a3=7fffd14914bc items=0 ppid=3018 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 27 05:40:31.162000 audit[3129]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.162000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff72a8520 a2=0 a3=7ffff72a850c items=0 ppid=3018 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 05:40:31.165000 audit[3131]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.165000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfd860db0 a2=0 a3=7ffdfd860d9c items=0 ppid=3018 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 
05:40:31.166000 audit[3132]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.166000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe644073f0 a2=0 a3=7ffe644073dc items=0 ppid=3018 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 05:40:31.169000 audit[3134]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.169000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff447e7350 a2=0 a3=7fff447e733c items=0 ppid=3018 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.172000 audit[3137]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.172000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcfdedee50 a2=0 a3=7ffcfdedee3c items=0 ppid=3018 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:31.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.173000 audit[3138]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.173000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2f8e22a0 a2=0 a3=7fff2f8e228c items=0 ppid=3018 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 05:40:31.176000 audit[3140]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.176000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc2402340 a2=0 a3=7fffc240232c items=0 ppid=3018 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 05:40:31.177000 audit[3141]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.177000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec5a823e0 a2=0 
a3=7ffec5a823cc items=0 ppid=3018 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 05:40:31.179000 audit[3143]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.179000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc3037010 a2=0 a3=7fffc3036ffc items=0 ppid=3018 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 27 05:40:31.182000 audit[3146]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.182000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc64091800 a2=0 a3=7ffc640917ec items=0 ppid=3018 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.182000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 27 05:40:31.186000 audit[3149]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.186000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8afbe980 a2=0 a3=7ffe8afbe96c items=0 ppid=3018 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 27 05:40:31.187000 audit[3150]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.187000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd15111b0 a2=0 a3=7ffdd151119c items=0 ppid=3018 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 05:40:31.189000 audit[3152]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.189000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 
a1=7ffd4d16bf20 a2=0 a3=7ffd4d16bf0c items=0 ppid=3018 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.193000 audit[3155]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.193000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb247b2f0 a2=0 a3=7ffcb247b2dc items=0 ppid=3018 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.194000 audit[3156]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.194000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec7c49d00 a2=0 a3=7ffec7c49cec items=0 ppid=3018 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 05:40:31.196000 audit[3158]: NETFILTER_CFG table=nat:78 family=2 entries=1 
op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 05:40:31.196000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe52a2b190 a2=0 a3=7ffe52a2b17c items=0 ppid=3018 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 05:40:31.218000 audit[3164]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:31.218000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce5004530 a2=0 a3=7ffce500451c items=0 ppid=3018 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:31.231000 audit[3164]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:31.231000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffce5004530 a2=0 a3=7ffce500451c items=0 ppid=3018 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.231000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:31.233000 audit[3169]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.233000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdc4c42e10 a2=0 a3=7ffdc4c42dfc items=0 ppid=3018 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 05:40:31.236000 audit[3171]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.236000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc2f2f8430 a2=0 a3=7ffc2f2f841c items=0 ppid=3018 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 27 05:40:31.240000 audit[3174]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.240000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7bbb7fe0 a2=0 a3=7fff7bbb7fcc items=0 ppid=3018 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 27 05:40:31.241000 audit[3175]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.241000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff007eff70 a2=0 a3=7fff007eff5c items=0 ppid=3018 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 05:40:31.244000 audit[3177]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.244000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdefca5b80 a2=0 a3=7ffdefca5b6c items=0 ppid=3018 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 05:40:31.245000 audit[3178]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3178 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.245000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce32bb220 a2=0 a3=7ffce32bb20c items=0 ppid=3018 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 05:40:31.248000 audit[3180]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.248000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd38c6fef0 a2=0 a3=7ffd38c6fedc items=0 ppid=3018 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.248000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.251000 audit[3183]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.251000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc45622ca0 a2=0 a3=7ffc45622c8c items=0 ppid=3018 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.251000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.252000 audit[3184]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.252000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccf2c9800 a2=0 a3=7ffccf2c97ec items=0 ppid=3018 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 05:40:31.255000 audit[3186]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.255000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdbbf4d7a0 a2=0 a3=7ffdbbf4d78c items=0 ppid=3018 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.255000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 05:40:31.256000 audit[3187]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.256000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff26c98150 a2=0 a3=7fff26c9813c items=0 
ppid=3018 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 05:40:31.258000 audit[3189]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.258000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8ae946f0 a2=0 a3=7ffe8ae946dc items=0 ppid=3018 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 27 05:40:31.262000 audit[3192]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.262000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4d784ac0 a2=0 a3=7fff4d784aac items=0 ppid=3018 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.262000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 27 05:40:31.265000 audit[3195]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.265000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc198de590 a2=0 a3=7ffc198de57c items=0 ppid=3018 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 27 05:40:31.266000 audit[3196]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.266000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff270754f0 a2=0 a3=7fff270754dc items=0 ppid=3018 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 05:40:31.269000 audit[3198]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.269000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7fff6c283ec0 a2=0 a3=7fff6c283eac items=0 ppid=3018 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.272000 audit[3201]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.272000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd8ce68f0 a2=0 a3=7ffcd8ce68dc items=0 ppid=3018 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 05:40:31.273000 audit[3202]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.273000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff71f6a60 a2=0 a3=7ffff71f6a4c items=0 ppid=3018 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 05:40:31.276000 audit[3204]: NETFILTER_CFG table=nat:99 
family=10 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.276000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe7e44dec0 a2=0 a3=7ffe7e44deac items=0 ppid=3018 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 05:40:31.277000 audit[3205]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.277000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbd562050 a2=0 a3=7ffdbd56203c items=0 ppid=3018 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 05:40:31.279000 audit[3207]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.279000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde2785340 a2=0 a3=7ffde278532c items=0 ppid=3018 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.279000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:40:31.284000 audit[3210]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 05:40:31.284000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd3733f160 a2=0 a3=7ffd3733f14c items=0 ppid=3018 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 05:40:31.287000 audit[3212]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 05:40:31.287000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff43d4a960 a2=0 a3=7fff43d4a94c items=0 ppid=3018 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:31.287000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:31.288000 audit[3212]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 05:40:31.288000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff43d4a960 a2=0 a3=7fff43d4a94c items=0 ppid=3018 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
27 05:40:31.288000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:32.840639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445708627.mount: Deactivated successfully. Jan 27 05:40:33.417786 kubelet[2893]: I0127 05:40:33.417716 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mw57j" podStartSLOduration=3.41770301 podStartE2EDuration="3.41770301s" podCreationTimestamp="2026-01-27 05:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:40:31.029334078 +0000 UTC m=+8.159241735" watchObservedRunningTime="2026-01-27 05:40:33.41770301 +0000 UTC m=+10.547610675" Jan 27 05:40:33.521038 containerd[1684]: time="2026-01-27T05:40:33.520987608Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:33.522908 containerd[1684]: time="2026-01-27T05:40:33.522880738Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 27 05:40:33.525649 containerd[1684]: time="2026-01-27T05:40:33.525622470Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:33.528864 containerd[1684]: time="2026-01-27T05:40:33.528350768Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:33.528864 containerd[1684]: time="2026-01-27T05:40:33.528761734Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag 
\"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.392556967s" Jan 27 05:40:33.528864 containerd[1684]: time="2026-01-27T05:40:33.528785719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 27 05:40:33.539475 containerd[1684]: time="2026-01-27T05:40:33.539446508Z" level=info msg="CreateContainer within sandbox \"399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 27 05:40:33.558991 containerd[1684]: time="2026-01-27T05:40:33.558678768Z" level=info msg="Container a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:33.562431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853734553.mount: Deactivated successfully. 
Jan 27 05:40:33.570898 containerd[1684]: time="2026-01-27T05:40:33.570852486Z" level=info msg="CreateContainer within sandbox \"399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872\"" Jan 27 05:40:33.571522 containerd[1684]: time="2026-01-27T05:40:33.571498000Z" level=info msg="StartContainer for \"a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872\"" Jan 27 05:40:33.572194 containerd[1684]: time="2026-01-27T05:40:33.572172648Z" level=info msg="connecting to shim a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872" address="unix:///run/containerd/s/16665a1858466eb826a91a7cbf7835abde715d8b4299b83d5b902434148520cd" protocol=ttrpc version=3 Jan 27 05:40:33.593210 systemd[1]: Started cri-containerd-a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872.scope - libcontainer container a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872. 
Jan 27 05:40:33.601000 audit: BPF prog-id=146 op=LOAD Jan 27 05:40:33.602000 audit: BPF prog-id=147 op=LOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=147 op=UNLOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=148 op=LOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=149 op=LOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=149 op=UNLOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=148 op=UNLOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.602000 audit: BPF prog-id=150 op=LOAD Jan 27 05:40:33.602000 audit[3221]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3086 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:33.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134383139333130303435323136303964313666636137303665303431 Jan 27 05:40:33.619411 containerd[1684]: time="2026-01-27T05:40:33.619377519Z" level=info msg="StartContainer for \"a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872\" returns successfully" Jan 27 05:40:35.750780 kubelet[2893]: I0127 05:40:35.750726 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-m5p7d" podStartSLOduration=3.356278551 podStartE2EDuration="5.75071044s" podCreationTimestamp="2026-01-27 05:40:30 +0000 UTC" firstStartedPulling="2026-01-27 05:40:31.135248457 +0000 UTC m=+8.265156100" lastFinishedPulling="2026-01-27 05:40:33.529680346 +0000 UTC m=+10.659587989" observedRunningTime="2026-01-27 05:40:34.043518698 +0000 UTC m=+11.173426385" watchObservedRunningTime="2026-01-27 05:40:35.75071044 +0000 UTC m=+12.880618106" Jan 27 05:40:38.973789 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 27 05:40:38.973897 kernel: audit: type=1106 audit(1769492438.967:520): pid=1942 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:40:38.967000 audit[1942]: USER_END pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:40:38.968867 sudo[1942]: pam_unix(sudo:session): session closed for user root Jan 27 05:40:38.972000 audit[1942]: CRED_DISP pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 05:40:38.983838 kernel: audit: type=1104 audit(1769492438.972:521): pid=1942 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 27 05:40:39.073868 sshd[1941]: Connection closed by 4.153.228.146 port 47650 Jan 27 05:40:39.076174 sshd-session[1937]: pam_unix(sshd:session): session closed for user core Jan 27 05:40:39.076000 audit[1937]: USER_END pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:40:39.084026 kernel: audit: type=1106 audit(1769492439.076:522): pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:40:39.084097 kernel: audit: type=1104 audit(1769492439.076:523): pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:40:39.076000 audit[1937]: CRED_DISP pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:40:39.082345 systemd[1]: sshd@8-10.0.2.139:22-4.153.228.146:47650.service: Deactivated successfully. Jan 27 05:40:39.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.139:22-4.153.228.146:47650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:40:39.088279 kernel: audit: type=1131 audit(1769492439.082:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.139:22-4.153.228.146:47650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:40:39.087091 systemd[1]: session-10.scope: Deactivated successfully. Jan 27 05:40:39.087303 systemd[1]: session-10.scope: Consumed 5.231s CPU time, 233.5M memory peak. Jan 27 05:40:39.089230 systemd-logind[1646]: Session 10 logged out. Waiting for processes to exit. Jan 27 05:40:39.093353 systemd-logind[1646]: Removed session 10. Jan 27 05:40:39.861000 audit[3300]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:39.866098 kernel: audit: type=1325 audit(1769492439.861:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:39.861000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcda28a7c0 a2=0 a3=7ffcda28a7ac items=0 ppid=3018 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:39.872032 kernel: audit: type=1300 audit(1769492439.861:525): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcda28a7c0 a2=0 a3=7ffcda28a7ac items=0 ppid=3018 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:39.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:39.876025 kernel: audit: type=1327 audit(1769492439.861:525): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:39.865000 audit[3300]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:39.880065 kernel: audit: type=1325 audit(1769492439.865:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:39.865000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcda28a7c0 a2=0 a3=0 items=0 ppid=3018 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:39.885020 kernel: audit: type=1300 audit(1769492439.865:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcda28a7c0 a2=0 a3=0 items=0 ppid=3018 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:39.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:40.880000 audit[3302]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:40.880000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd2bf939c0 a2=0 a3=7ffd2bf939ac items=0 ppid=3018 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:40.880000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:40.885000 audit[3302]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:40.885000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2bf939c0 a2=0 a3=0 items=0 ppid=3018 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:40.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:41.905000 audit[3304]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:41.905000 audit[3304]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe53bc0130 a2=0 a3=7ffe53bc011c items=0 ppid=3018 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:41.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:41.910000 audit[3304]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:41.910000 audit[3304]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe53bc0130 a2=0 a3=0 items=0 ppid=3018 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:41.910000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:43.164000 audit[3309]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:43.164000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9dba5880 a2=0 a3=7ffd9dba586c items=0 ppid=3018 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:43.179929 systemd[1]: Created slice kubepods-besteffort-pod50ae8343_14d8_4105_b0c1_1a3caafce2e2.slice - libcontainer container kubepods-besteffort-pod50ae8343_14d8_4105_b0c1_1a3caafce2e2.slice. Jan 27 05:40:43.179000 audit[3309]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:43.179000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9dba5880 a2=0 a3=0 items=0 ppid=3018 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:43.185005 kubelet[2893]: I0127 05:40:43.184874 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ae8343-14d8-4105-b0c1-1a3caafce2e2-tigera-ca-bundle\") pod \"calico-typha-7cf74b6b47-xhk4g\" (UID: \"50ae8343-14d8-4105-b0c1-1a3caafce2e2\") " 
pod="calico-system/calico-typha-7cf74b6b47-xhk4g" Jan 27 05:40:43.185005 kubelet[2893]: I0127 05:40:43.184907 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50ae8343-14d8-4105-b0c1-1a3caafce2e2-typha-certs\") pod \"calico-typha-7cf74b6b47-xhk4g\" (UID: \"50ae8343-14d8-4105-b0c1-1a3caafce2e2\") " pod="calico-system/calico-typha-7cf74b6b47-xhk4g" Jan 27 05:40:43.185005 kubelet[2893]: I0127 05:40:43.184924 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l55q\" (UniqueName: \"kubernetes.io/projected/50ae8343-14d8-4105-b0c1-1a3caafce2e2-kube-api-access-4l55q\") pod \"calico-typha-7cf74b6b47-xhk4g\" (UID: \"50ae8343-14d8-4105-b0c1-1a3caafce2e2\") " pod="calico-system/calico-typha-7cf74b6b47-xhk4g" Jan 27 05:40:43.383065 systemd[1]: Created slice kubepods-besteffort-pode25d7853_b7be_4619_9c5b_64d9dfb87152.slice - libcontainer container kubepods-besteffort-pode25d7853_b7be_4619_9c5b_64d9dfb87152.slice. 
Jan 27 05:40:43.385565 kubelet[2893]: I0127 05:40:43.385493 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-policysync\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.385565 kubelet[2893]: I0127 05:40:43.385520 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-cni-log-dir\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.385565 kubelet[2893]: I0127 05:40:43.385535 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-cni-net-dir\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.385565 kubelet[2893]: I0127 05:40:43.385549 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhd7\" (UniqueName: \"kubernetes.io/projected/e25d7853-b7be-4619-9c5b-64d9dfb87152-kube-api-access-blhd7\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.385790 kubelet[2893]: I0127 05:40:43.385734 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-xtables-lock\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.385790 kubelet[2893]: I0127 05:40:43.385750 2893 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-var-lib-calico\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.386001 kubelet[2893]: I0127 05:40:43.385938 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-lib-modules\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.386178 kubelet[2893]: I0127 05:40:43.385992 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e25d7853-b7be-4619-9c5b-64d9dfb87152-node-certs\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.386178 kubelet[2893]: I0127 05:40:43.386116 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-var-run-calico\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.386178 kubelet[2893]: I0127 05:40:43.386132 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-cni-bin-dir\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.386178 kubelet[2893]: I0127 05:40:43.386145 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e25d7853-b7be-4619-9c5b-64d9dfb87152-flexvol-driver-host\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.388071 kubelet[2893]: I0127 05:40:43.388043 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e25d7853-b7be-4619-9c5b-64d9dfb87152-tigera-ca-bundle\") pod \"calico-node-lr6sz\" (UID: \"e25d7853-b7be-4619-9c5b-64d9dfb87152\") " pod="calico-system/calico-node-lr6sz" Jan 27 05:40:43.486628 containerd[1684]: time="2026-01-27T05:40:43.486543034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf74b6b47-xhk4g,Uid:50ae8343-14d8-4105-b0c1-1a3caafce2e2,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:43.489787 kubelet[2893]: E0127 05:40:43.489737 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.489787 kubelet[2893]: W0127 05:40:43.489754 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.489787 kubelet[2893]: E0127 05:40:43.489770 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.489915 kubelet[2893]: E0127 05:40:43.489904 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.489915 kubelet[2893]: W0127 05:40:43.489909 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.489955 kubelet[2893]: E0127 05:40:43.489916 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.490309 kubelet[2893]: E0127 05:40:43.490061 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.490309 kubelet[2893]: W0127 05:40:43.490069 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.490309 kubelet[2893]: E0127 05:40:43.490076 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.490760 kubelet[2893]: E0127 05:40:43.490685 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.490760 kubelet[2893]: W0127 05:40:43.490698 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.490760 kubelet[2893]: E0127 05:40:43.490707 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.491329 kubelet[2893]: E0127 05:40:43.491316 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.491329 kubelet[2893]: W0127 05:40:43.491328 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.491409 kubelet[2893]: E0127 05:40:43.491338 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.491537 kubelet[2893]: E0127 05:40:43.491502 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.491537 kubelet[2893]: W0127 05:40:43.491509 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.491586 kubelet[2893]: E0127 05:40:43.491515 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.491737 kubelet[2893]: E0127 05:40:43.491682 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.491737 kubelet[2893]: W0127 05:40:43.491690 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.491737 kubelet[2893]: E0127 05:40:43.491697 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.492028 kubelet[2893]: E0127 05:40:43.491944 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.492028 kubelet[2893]: W0127 05:40:43.491954 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.492028 kubelet[2893]: E0127 05:40:43.491961 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.492942 kubelet[2893]: E0127 05:40:43.492923 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.492942 kubelet[2893]: W0127 05:40:43.492936 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.495744 kubelet[2893]: E0127 05:40:43.492946 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.497144 kubelet[2893]: E0127 05:40:43.497130 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.497144 kubelet[2893]: W0127 05:40:43.497143 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.497223 kubelet[2893]: E0127 05:40:43.497153 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.503804 kubelet[2893]: E0127 05:40:43.503787 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.503804 kubelet[2893]: W0127 05:40:43.503800 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.503906 kubelet[2893]: E0127 05:40:43.503813 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.516501 containerd[1684]: time="2026-01-27T05:40:43.516426932Z" level=info msg="connecting to shim a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454" address="unix:///run/containerd/s/b4dda54a8def9bd25310f8d8bf69bec6c2b8e50f2f3c903fe728d8968a399438" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:43.545247 systemd[1]: Started cri-containerd-a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454.scope - libcontainer container a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454. 
Jan 27 05:40:43.570255 kubelet[2893]: E0127 05:40:43.570192 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:43.572000 audit: BPF prog-id=151 op=LOAD Jan 27 05:40:43.574000 audit: BPF prog-id=152 op=LOAD Jan 27 05:40:43.574000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.574000 audit: BPF prog-id=152 op=UNLOAD Jan 27 05:40:43.574000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.575000 audit: BPF prog-id=153 op=LOAD Jan 27 05:40:43.575000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3332 pid=3344 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.575000 audit: BPF prog-id=154 op=LOAD Jan 27 05:40:43.575000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.575000 audit: BPF prog-id=154 op=UNLOAD Jan 27 05:40:43.575000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.575000 audit: BPF prog-id=153 op=UNLOAD Jan 27 05:40:43.575000 audit[3344]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 
a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.575000 audit: BPF prog-id=155 op=LOAD Jan 27 05:40:43.575000 audit[3344]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3332 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130306630326339643036666534303137623232356437383564326433 Jan 27 05:40:43.584850 kubelet[2893]: E0127 05:40:43.584831 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.584850 kubelet[2893]: W0127 05:40:43.584848 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.584965 kubelet[2893]: E0127 05:40:43.584863 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.585135 kubelet[2893]: E0127 05:40:43.585119 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.585135 kubelet[2893]: W0127 05:40:43.585129 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.585187 kubelet[2893]: E0127 05:40:43.585137 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.585336 kubelet[2893]: E0127 05:40:43.585326 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.585336 kubelet[2893]: W0127 05:40:43.585332 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.585386 kubelet[2893]: E0127 05:40:43.585340 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.586053 kubelet[2893]: E0127 05:40:43.586040 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.586053 kubelet[2893]: W0127 05:40:43.586051 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.586236 kubelet[2893]: E0127 05:40:43.586061 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.586296 kubelet[2893]: E0127 05:40:43.586264 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.586371 kubelet[2893]: W0127 05:40:43.586320 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.586371 kubelet[2893]: E0127 05:40:43.586333 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.586507 kubelet[2893]: E0127 05:40:43.586498 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.586507 kubelet[2893]: W0127 05:40:43.586506 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.586556 kubelet[2893]: E0127 05:40:43.586512 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.586796 kubelet[2893]: E0127 05:40:43.586784 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.586825 kubelet[2893]: W0127 05:40:43.586796 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.586825 kubelet[2893]: E0127 05:40:43.586804 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.587057 kubelet[2893]: E0127 05:40:43.587048 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.587057 kubelet[2893]: W0127 05:40:43.587056 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.587152 kubelet[2893]: E0127 05:40:43.587063 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.587385 kubelet[2893]: E0127 05:40:43.587374 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.587385 kubelet[2893]: W0127 05:40:43.587385 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.587530 kubelet[2893]: E0127 05:40:43.587393 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.587724 kubelet[2893]: E0127 05:40:43.587709 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.587770 kubelet[2893]: W0127 05:40:43.587760 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.587793 kubelet[2893]: E0127 05:40:43.587772 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.587974 kubelet[2893]: E0127 05:40:43.587965 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588049 kubelet[2893]: W0127 05:40:43.587988 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588049 kubelet[2893]: E0127 05:40:43.587995 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.588228 kubelet[2893]: E0127 05:40:43.588219 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588228 kubelet[2893]: W0127 05:40:43.588228 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588272 kubelet[2893]: E0127 05:40:43.588235 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.588445 kubelet[2893]: E0127 05:40:43.588389 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588445 kubelet[2893]: W0127 05:40:43.588398 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588445 kubelet[2893]: E0127 05:40:43.588404 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.588613 kubelet[2893]: E0127 05:40:43.588556 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588613 kubelet[2893]: W0127 05:40:43.588561 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588613 kubelet[2893]: E0127 05:40:43.588567 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.588743 kubelet[2893]: E0127 05:40:43.588726 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588743 kubelet[2893]: W0127 05:40:43.588731 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588743 kubelet[2893]: E0127 05:40:43.588737 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.588877 kubelet[2893]: E0127 05:40:43.588869 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.588914 kubelet[2893]: W0127 05:40:43.588877 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.588914 kubelet[2893]: E0127 05:40:43.588884 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.589035 kubelet[2893]: E0127 05:40:43.589027 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589035 kubelet[2893]: W0127 05:40:43.589035 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589081 kubelet[2893]: E0127 05:40:43.589040 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.589168 kubelet[2893]: E0127 05:40:43.589155 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589191 kubelet[2893]: W0127 05:40:43.589163 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589191 kubelet[2893]: E0127 05:40:43.589174 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.589306 kubelet[2893]: E0127 05:40:43.589295 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589306 kubelet[2893]: W0127 05:40:43.589303 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589346 kubelet[2893]: E0127 05:40:43.589309 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.589428 kubelet[2893]: E0127 05:40:43.589420 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589453 kubelet[2893]: W0127 05:40:43.589433 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589453 kubelet[2893]: E0127 05:40:43.589439 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.589639 kubelet[2893]: E0127 05:40:43.589621 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589639 kubelet[2893]: W0127 05:40:43.589630 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589639 kubelet[2893]: E0127 05:40:43.589636 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.589704 kubelet[2893]: I0127 05:40:43.589658 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2fd25125-023e-4bbf-9ed8-e267fcf6bfb3-varrun\") pod \"csi-node-driver-gt29m\" (UID: \"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3\") " pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:43.589808 kubelet[2893]: E0127 05:40:43.589798 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.589808 kubelet[2893]: W0127 05:40:43.589807 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.589854 kubelet[2893]: E0127 05:40:43.589813 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.589934 kubelet[2893]: I0127 05:40:43.589918 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fd25125-023e-4bbf-9ed8-e267fcf6bfb3-kubelet-dir\") pod \"csi-node-driver-gt29m\" (UID: \"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3\") " pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:43.590051 kubelet[2893]: E0127 05:40:43.590042 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.590094 kubelet[2893]: W0127 05:40:43.590051 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.590094 kubelet[2893]: E0127 05:40:43.590058 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.590208 kubelet[2893]: E0127 05:40:43.590192 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.590208 kubelet[2893]: W0127 05:40:43.590200 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.590208 kubelet[2893]: E0127 05:40:43.590206 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.590537 kubelet[2893]: E0127 05:40:43.590347 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.590537 kubelet[2893]: W0127 05:40:43.590352 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.590537 kubelet[2893]: E0127 05:40:43.590358 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.590537 kubelet[2893]: I0127 05:40:43.590369 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2fd25125-023e-4bbf-9ed8-e267fcf6bfb3-registration-dir\") pod \"csi-node-driver-gt29m\" (UID: \"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3\") " pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:43.590537 kubelet[2893]: E0127 05:40:43.590499 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.590537 kubelet[2893]: W0127 05:40:43.590505 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.590537 kubelet[2893]: E0127 05:40:43.590511 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.590537 kubelet[2893]: I0127 05:40:43.590526 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2fd25125-023e-4bbf-9ed8-e267fcf6bfb3-socket-dir\") pod \"csi-node-driver-gt29m\" (UID: \"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3\") " pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:43.590717 kubelet[2893]: E0127 05:40:43.590708 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.590717 kubelet[2893]: W0127 05:40:43.590717 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.590862 kubelet[2893]: E0127 05:40:43.590723 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.591034 kubelet[2893]: E0127 05:40:43.590878 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.591034 kubelet[2893]: W0127 05:40:43.590882 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.591034 kubelet[2893]: E0127 05:40:43.590888 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.591220 kubelet[2893]: E0127 05:40:43.591049 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.591220 kubelet[2893]: W0127 05:40:43.591054 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.591220 kubelet[2893]: E0127 05:40:43.591060 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.591220 kubelet[2893]: E0127 05:40:43.591174 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.591220 kubelet[2893]: W0127 05:40:43.591179 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.591220 kubelet[2893]: E0127 05:40:43.591196 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.591564 kubelet[2893]: E0127 05:40:43.591321 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.591564 kubelet[2893]: W0127 05:40:43.591327 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.591564 kubelet[2893]: E0127 05:40:43.591333 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.591564 kubelet[2893]: I0127 05:40:43.591348 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zm65\" (UniqueName: \"kubernetes.io/projected/2fd25125-023e-4bbf-9ed8-e267fcf6bfb3-kube-api-access-6zm65\") pod \"csi-node-driver-gt29m\" (UID: \"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3\") " pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:43.591564 kubelet[2893]: E0127 05:40:43.591496 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.591564 kubelet[2893]: W0127 05:40:43.591502 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.591564 kubelet[2893]: E0127 05:40:43.591508 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591633 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.592006 kubelet[2893]: W0127 05:40:43.591638 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591644 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591776 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.592006 kubelet[2893]: W0127 05:40:43.591781 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591786 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591892 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.592006 kubelet[2893]: W0127 05:40:43.591897 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.592006 kubelet[2893]: E0127 05:40:43.591902 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.619759 containerd[1684]: time="2026-01-27T05:40:43.619706685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf74b6b47-xhk4g,Uid:50ae8343-14d8-4105-b0c1-1a3caafce2e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454\"" Jan 27 05:40:43.621757 containerd[1684]: time="2026-01-27T05:40:43.621537698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 27 05:40:43.689197 containerd[1684]: time="2026-01-27T05:40:43.689160785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr6sz,Uid:e25d7853-b7be-4619-9c5b-64d9dfb87152,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:43.691920 kubelet[2893]: E0127 05:40:43.691860 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.691920 kubelet[2893]: W0127 05:40:43.691896 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.691920 kubelet[2893]: E0127 05:40:43.691912 2893 plugins.go:697] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692112 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.693004 kubelet[2893]: W0127 05:40:43.692118 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692125 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692463 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.693004 kubelet[2893]: W0127 05:40:43.692470 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692478 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692704 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.693004 kubelet[2893]: W0127 05:40:43.692710 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.693004 kubelet[2893]: E0127 05:40:43.692717 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.693549 kubelet[2893]: E0127 05:40:43.693448 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.693549 kubelet[2893]: W0127 05:40:43.693459 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.693549 kubelet[2893]: E0127 05:40:43.693470 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.693806 kubelet[2893]: E0127 05:40:43.693716 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.693806 kubelet[2893]: W0127 05:40:43.693724 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.693806 kubelet[2893]: E0127 05:40:43.693731 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.694031 kubelet[2893]: E0127 05:40:43.693953 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.694031 kubelet[2893]: W0127 05:40:43.693971 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.694031 kubelet[2893]: E0127 05:40:43.693978 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.694302 kubelet[2893]: E0127 05:40:43.694226 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.694302 kubelet[2893]: W0127 05:40:43.694234 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.694302 kubelet[2893]: E0127 05:40:43.694241 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.694454 kubelet[2893]: E0127 05:40:43.694448 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.694501 kubelet[2893]: W0127 05:40:43.694495 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.694553 kubelet[2893]: E0127 05:40:43.694530 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.694804 kubelet[2893]: E0127 05:40:43.694725 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.694804 kubelet[2893]: W0127 05:40:43.694732 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.694804 kubelet[2893]: E0127 05:40:43.694739 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.694934 kubelet[2893]: E0127 05:40:43.694928 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.694969 kubelet[2893]: W0127 05:40:43.694964 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.694969 kubelet[2893]: E0127 05:40:43.694981 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.695291 kubelet[2893]: E0127 05:40:43.695208 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.695291 kubelet[2893]: W0127 05:40:43.695215 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.695291 kubelet[2893]: E0127 05:40:43.695221 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.695444 kubelet[2893]: E0127 05:40:43.695399 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.695444 kubelet[2893]: W0127 05:40:43.695406 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.695444 kubelet[2893]: E0127 05:40:43.695412 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.695699 kubelet[2893]: E0127 05:40:43.695637 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.695699 kubelet[2893]: W0127 05:40:43.695643 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.695699 kubelet[2893]: E0127 05:40:43.695650 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.695860 kubelet[2893]: E0127 05:40:43.695845 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.695936 kubelet[2893]: W0127 05:40:43.695888 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.695936 kubelet[2893]: E0127 05:40:43.695896 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.696198 kubelet[2893]: E0127 05:40:43.696137 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.696198 kubelet[2893]: W0127 05:40:43.696145 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.696198 kubelet[2893]: E0127 05:40:43.696153 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.696388 kubelet[2893]: E0127 05:40:43.696381 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.696439 kubelet[2893]: W0127 05:40:43.696428 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.696517 kubelet[2893]: E0127 05:40:43.696473 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.696776 kubelet[2893]: E0127 05:40:43.696737 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.696776 kubelet[2893]: W0127 05:40:43.696745 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.696776 kubelet[2893]: E0127 05:40:43.696754 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.697047 kubelet[2893]: E0127 05:40:43.696997 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.697431 kubelet[2893]: W0127 05:40:43.697147 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.697431 kubelet[2893]: E0127 05:40:43.697161 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.697515 kubelet[2893]: E0127 05:40:43.697490 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.697563 kubelet[2893]: W0127 05:40:43.697546 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.697600 kubelet[2893]: E0127 05:40:43.697592 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.697873 kubelet[2893]: E0127 05:40:43.697800 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.697873 kubelet[2893]: W0127 05:40:43.697806 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.697873 kubelet[2893]: E0127 05:40:43.697813 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.698149 kubelet[2893]: E0127 05:40:43.698141 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.698457 kubelet[2893]: W0127 05:40:43.698204 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.698536 kubelet[2893]: E0127 05:40:43.698529 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.698828 kubelet[2893]: E0127 05:40:43.698778 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.698828 kubelet[2893]: W0127 05:40:43.698786 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.698828 kubelet[2893]: E0127 05:40:43.698793 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.699116 kubelet[2893]: E0127 05:40:43.699089 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.699196 kubelet[2893]: W0127 05:40:43.699097 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.699196 kubelet[2893]: E0127 05:40:43.699158 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.700074 kubelet[2893]: E0127 05:40:43.699494 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.700074 kubelet[2893]: W0127 05:40:43.699502 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.700074 kubelet[2893]: E0127 05:40:43.699509 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:43.705319 kubelet[2893]: E0127 05:40:43.705269 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:43.705319 kubelet[2893]: W0127 05:40:43.705282 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:43.705319 kubelet[2893]: E0127 05:40:43.705294 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:43.719417 containerd[1684]: time="2026-01-27T05:40:43.719368123Z" level=info msg="connecting to shim 11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856" address="unix:///run/containerd/s/9356b4f30b36d09ca1d81f6d99e6c41b49c94fc2da6620f4ab96e32229175607" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:40:43.745204 systemd[1]: Started cri-containerd-11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856.scope - libcontainer container 11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856. 
Jan 27 05:40:43.755000 audit: BPF prog-id=156 op=LOAD Jan 27 05:40:43.755000 audit: BPF prog-id=157 op=LOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=157 op=UNLOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=158 op=LOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=159 op=LOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=159 op=UNLOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=158 op=UNLOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.755000 audit: BPF prog-id=160 op=LOAD Jan 27 05:40:43.755000 audit[3460]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3449 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:43.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636165623032646237303039303964303131636237303266386334 Jan 27 05:40:43.772897 containerd[1684]: time="2026-01-27T05:40:43.772700617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr6sz,Uid:e25d7853-b7be-4619-9c5b-64d9dfb87152,Namespace:calico-system,Attempt:0,} returns sandbox id \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\"" Jan 27 05:40:44.201000 audit[3486]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:44.203547 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 27 05:40:44.203600 kernel: audit: type=1325 audit(1769492444.201:549): table=filter:113 family=2 entries=22 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:44.201000 audit[3486]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd4d29870 a2=0 a3=7ffcd4d2985c items=0 ppid=3018 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:44.209536 kernel: audit: type=1300 audit(1769492444.201:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd4d29870 a2=0 a3=7ffcd4d2985c items=0 ppid=3018 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:44.213051 kernel: audit: type=1327 audit(1769492444.201:549): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:44.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:44.207000 audit[3486]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:44.218035 kernel: audit: type=1325 audit(1769492444.207:550): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:44.207000 audit[3486]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd4d29870 a2=0 a3=0 items=0 ppid=3018 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:44.224034 kernel: audit: type=1300 audit(1769492444.207:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd4d29870 a2=0 a3=0 items=0 ppid=3018 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:44.207000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:44.227035 kernel: audit: type=1327 audit(1769492444.207:550): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:45.094703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885652461.mount: Deactivated successfully. Jan 27 05:40:45.974078 kubelet[2893]: E0127 05:40:45.974042 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:46.259779 containerd[1684]: time="2026-01-27T05:40:46.259257539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:46.260629 containerd[1684]: time="2026-01-27T05:40:46.260610888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 27 05:40:46.262691 containerd[1684]: time="2026-01-27T05:40:46.262675266Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:46.268321 containerd[1684]: time="2026-01-27T05:40:46.268291452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:46.269151 containerd[1684]: time="2026-01-27T05:40:46.269126573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.647561538s" Jan 27 05:40:46.269521 containerd[1684]: time="2026-01-27T05:40:46.269495090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 27 05:40:46.270720 containerd[1684]: time="2026-01-27T05:40:46.270701183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 27 05:40:46.286508 containerd[1684]: time="2026-01-27T05:40:46.286440981Z" level=info msg="CreateContainer within sandbox \"a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 27 05:40:46.298128 containerd[1684]: time="2026-01-27T05:40:46.298087129Z" level=info msg="Container 25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:46.303056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506047888.mount: Deactivated successfully. 
Jan 27 05:40:46.311900 containerd[1684]: time="2026-01-27T05:40:46.311865067Z" level=info msg="CreateContainer within sandbox \"a00f02c9d06fe4017b225d785d2d3af93afc7ce2d647797e8788ee55e5dd4454\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5\"" Jan 27 05:40:46.313280 containerd[1684]: time="2026-01-27T05:40:46.313259391Z" level=info msg="StartContainer for \"25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5\"" Jan 27 05:40:46.314348 containerd[1684]: time="2026-01-27T05:40:46.314306795Z" level=info msg="connecting to shim 25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5" address="unix:///run/containerd/s/b4dda54a8def9bd25310f8d8bf69bec6c2b8e50f2f3c903fe728d8968a399438" protocol=ttrpc version=3 Jan 27 05:40:46.337339 systemd[1]: Started cri-containerd-25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5.scope - libcontainer container 25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5. 
Jan 27 05:40:46.348000 audit: BPF prog-id=161 op=LOAD Jan 27 05:40:46.352039 kernel: audit: type=1334 audit(1769492446.348:551): prog-id=161 op=LOAD Jan 27 05:40:46.352082 kernel: audit: type=1334 audit(1769492446.350:552): prog-id=162 op=LOAD Jan 27 05:40:46.350000 audit: BPF prog-id=162 op=LOAD Jan 27 05:40:46.350000 audit[3497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.355804 kernel: audit: type=1300 audit(1769492446.350:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.360335 kernel: audit: type=1327 audit(1769492446.350:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.350000 audit: BPF prog-id=162 op=UNLOAD Jan 27 05:40:46.350000 audit[3497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:40:46.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.352000 audit: BPF prog-id=163 op=LOAD Jan 27 05:40:46.352000 audit[3497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.352000 audit: BPF prog-id=164 op=LOAD Jan 27 05:40:46.352000 audit[3497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.352000 audit: BPF prog-id=164 op=UNLOAD Jan 27 05:40:46.352000 audit[3497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.352000 audit: BPF prog-id=163 op=UNLOAD Jan 27 05:40:46.352000 audit[3497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.352000 audit: BPF prog-id=165 op=LOAD Jan 27 05:40:46.352000 audit[3497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3332 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:46.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663039363164363366356134303732636336393833383262663432 Jan 27 05:40:46.403224 containerd[1684]: time="2026-01-27T05:40:46.403192629Z" level=info msg="StartContainer for \"25f0961d63f5a4072cc698382bf42f73369f772dbf49bdf4b0e174a2d2e767e5\" returns successfully" Jan 27 05:40:47.114021 kubelet[2893]: E0127 05:40:47.113973 2893 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114021 kubelet[2893]: W0127 05:40:47.113994 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114021 kubelet[2893]: E0127 05:40:47.114026 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.114401 kubelet[2893]: E0127 05:40:47.114179 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114401 kubelet[2893]: W0127 05:40:47.114185 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114401 kubelet[2893]: E0127 05:40:47.114192 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.114401 kubelet[2893]: E0127 05:40:47.114301 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114401 kubelet[2893]: W0127 05:40:47.114306 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114401 kubelet[2893]: E0127 05:40:47.114312 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.114534 kubelet[2893]: E0127 05:40:47.114486 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114534 kubelet[2893]: W0127 05:40:47.114493 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114534 kubelet[2893]: E0127 05:40:47.114499 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.114685 kubelet[2893]: E0127 05:40:47.114667 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114685 kubelet[2893]: W0127 05:40:47.114677 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114685 kubelet[2893]: E0127 05:40:47.114683 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.114811 kubelet[2893]: E0127 05:40:47.114797 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114811 kubelet[2893]: W0127 05:40:47.114806 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114851 kubelet[2893]: E0127 05:40:47.114812 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.114926 kubelet[2893]: E0127 05:40:47.114913 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.114926 kubelet[2893]: W0127 05:40:47.114921 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.114970 kubelet[2893]: E0127 05:40:47.114926 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.115050 kubelet[2893]: E0127 05:40:47.115042 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115050 kubelet[2893]: W0127 05:40:47.115049 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115096 kubelet[2893]: E0127 05:40:47.115055 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.115175 kubelet[2893]: E0127 05:40:47.115166 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115175 kubelet[2893]: W0127 05:40:47.115174 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115220 kubelet[2893]: E0127 05:40:47.115180 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.115290 kubelet[2893]: E0127 05:40:47.115282 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115290 kubelet[2893]: W0127 05:40:47.115289 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115330 kubelet[2893]: E0127 05:40:47.115295 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.115407 kubelet[2893]: E0127 05:40:47.115399 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115407 kubelet[2893]: W0127 05:40:47.115406 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115449 kubelet[2893]: E0127 05:40:47.115412 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.115522 kubelet[2893]: E0127 05:40:47.115514 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115522 kubelet[2893]: W0127 05:40:47.115522 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115562 kubelet[2893]: E0127 05:40:47.115527 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.115646 kubelet[2893]: E0127 05:40:47.115637 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115646 kubelet[2893]: W0127 05:40:47.115645 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115692 kubelet[2893]: E0127 05:40:47.115651 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.115771 kubelet[2893]: E0127 05:40:47.115762 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115771 kubelet[2893]: W0127 05:40:47.115770 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115811 kubelet[2893]: E0127 05:40:47.115777 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.115885 kubelet[2893]: E0127 05:40:47.115877 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.115909 kubelet[2893]: W0127 05:40:47.115885 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.115909 kubelet[2893]: E0127 05:40:47.115890 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.117161 kubelet[2893]: E0127 05:40:47.117152 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117161 kubelet[2893]: W0127 05:40:47.117161 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117236 kubelet[2893]: E0127 05:40:47.117169 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.117312 kubelet[2893]: E0127 05:40:47.117304 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117312 kubelet[2893]: W0127 05:40:47.117311 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117432 kubelet[2893]: E0127 05:40:47.117317 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.117457 kubelet[2893]: E0127 05:40:47.117448 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117457 kubelet[2893]: W0127 05:40:47.117453 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117493 kubelet[2893]: E0127 05:40:47.117459 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.117602 kubelet[2893]: E0127 05:40:47.117594 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117602 kubelet[2893]: W0127 05:40:47.117602 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117663 kubelet[2893]: E0127 05:40:47.117608 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.117737 kubelet[2893]: E0127 05:40:47.117729 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117737 kubelet[2893]: W0127 05:40:47.117737 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117847 kubelet[2893]: E0127 05:40:47.117743 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.117870 kubelet[2893]: E0127 05:40:47.117860 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.117870 kubelet[2893]: W0127 05:40:47.117865 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.117911 kubelet[2893]: E0127 05:40:47.117871 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.118035 kubelet[2893]: E0127 05:40:47.118025 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.118035 kubelet[2893]: W0127 05:40:47.118033 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.118079 kubelet[2893]: E0127 05:40:47.118039 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.118322 kubelet[2893]: E0127 05:40:47.118280 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.118322 kubelet[2893]: W0127 05:40:47.118297 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.118428 kubelet[2893]: E0127 05:40:47.118311 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.118559 kubelet[2893]: E0127 05:40:47.118552 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.118595 kubelet[2893]: W0127 05:40:47.118589 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.118632 kubelet[2893]: E0127 05:40:47.118626 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.118853 kubelet[2893]: E0127 05:40:47.118802 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.118853 kubelet[2893]: W0127 05:40:47.118810 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.118853 kubelet[2893]: E0127 05:40:47.118816 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.119037 kubelet[2893]: E0127 05:40:47.119031 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.119118 kubelet[2893]: W0127 05:40:47.119078 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.119118 kubelet[2893]: E0127 05:40:47.119087 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.119267 kubelet[2893]: E0127 05:40:47.119262 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.119395 kubelet[2893]: W0127 05:40:47.119302 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.119395 kubelet[2893]: E0127 05:40:47.119310 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.119475 kubelet[2893]: E0127 05:40:47.119469 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.119505 kubelet[2893]: W0127 05:40:47.119500 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.119634 kubelet[2893]: E0127 05:40:47.119535 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.119724 kubelet[2893]: E0127 05:40:47.119713 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.119754 kubelet[2893]: W0127 05:40:47.119723 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.119754 kubelet[2893]: E0127 05:40:47.119732 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.119870 kubelet[2893]: E0127 05:40:47.119861 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.119870 kubelet[2893]: W0127 05:40:47.119868 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.119928 kubelet[2893]: E0127 05:40:47.119874 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.120000 kubelet[2893]: E0127 05:40:47.119993 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.120000 kubelet[2893]: W0127 05:40:47.120000 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.120060 kubelet[2893]: E0127 05:40:47.120006 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.120244 kubelet[2893]: E0127 05:40:47.120209 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.120244 kubelet[2893]: W0127 05:40:47.120218 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.120244 kubelet[2893]: E0127 05:40:47.120227 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 05:40:47.120404 kubelet[2893]: E0127 05:40:47.120395 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 05:40:47.120433 kubelet[2893]: W0127 05:40:47.120404 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 05:40:47.120433 kubelet[2893]: E0127 05:40:47.120411 2893 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 05:40:47.820155 containerd[1684]: time="2026-01-27T05:40:47.820103695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:47.822519 containerd[1684]: time="2026-01-27T05:40:47.822482165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 27 05:40:47.825524 containerd[1684]: time="2026-01-27T05:40:47.825480265Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:47.830024 containerd[1684]: time="2026-01-27T05:40:47.829533944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:47.830181 containerd[1684]: time="2026-01-27T05:40:47.830160921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.559433424s" Jan 27 05:40:47.830237 containerd[1684]: time="2026-01-27T05:40:47.830227568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 27 05:40:47.835568 containerd[1684]: time="2026-01-27T05:40:47.835550989Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 27 05:40:47.851076 containerd[1684]: time="2026-01-27T05:40:47.851049122Z" level=info msg="Container 4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:47.862138 containerd[1684]: time="2026-01-27T05:40:47.862111956Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc\"" Jan 27 05:40:47.862900 containerd[1684]: time="2026-01-27T05:40:47.862861478Z" level=info msg="StartContainer for \"4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc\"" Jan 27 05:40:47.865403 containerd[1684]: time="2026-01-27T05:40:47.865373698Z" level=info msg="connecting to shim 4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc" address="unix:///run/containerd/s/9356b4f30b36d09ca1d81f6d99e6c41b49c94fc2da6620f4ab96e32229175607" protocol=ttrpc version=3 Jan 27 05:40:47.890460 systemd[1]: Started cri-containerd-4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc.scope - libcontainer container 4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc. 
Jan 27 05:40:47.934000 audit: BPF prog-id=166 op=LOAD Jan 27 05:40:47.934000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3449 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646438386261613564613935303338633537303131313164333030 Jan 27 05:40:47.934000 audit: BPF prog-id=167 op=LOAD Jan 27 05:40:47.934000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3449 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646438386261613564613935303338633537303131313164333030 Jan 27 05:40:47.934000 audit: BPF prog-id=167 op=UNLOAD Jan 27 05:40:47.934000 audit[3573]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:47.934000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646438386261613564613935303338633537303131313164333030 Jan 27 05:40:47.934000 audit: BPF prog-id=166 op=UNLOAD Jan 27 05:40:47.934000 audit[3573]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646438386261613564613935303338633537303131313164333030 Jan 27 05:40:47.934000 audit: BPF prog-id=168 op=LOAD Jan 27 05:40:47.934000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3449 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646438386261613564613935303338633537303131313164333030 Jan 27 05:40:47.960271 containerd[1684]: time="2026-01-27T05:40:47.960220805Z" level=info msg="StartContainer for \"4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc\" returns successfully" Jan 27 05:40:47.970337 systemd[1]: cri-containerd-4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc.scope: Deactivated successfully. 
Jan 27 05:40:47.971000 audit: BPF prog-id=168 op=UNLOAD Jan 27 05:40:47.974490 containerd[1684]: time="2026-01-27T05:40:47.973576277Z" level=info msg="received container exit event container_id:\"4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc\" id:\"4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc\" pid:3586 exited_at:{seconds:1769492447 nanos:972519765}" Jan 27 05:40:47.974566 kubelet[2893]: E0127 05:40:47.973850 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:47.999887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4edd88baa5da95038c5701111d3005942a88bed3c223f8241bacc796d5a8b7bc-rootfs.mount: Deactivated successfully. Jan 27 05:40:48.059265 kubelet[2893]: I0127 05:40:48.059240 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 05:40:48.075485 kubelet[2893]: I0127 05:40:48.075169 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf74b6b47-xhk4g" podStartSLOduration=2.425681064 podStartE2EDuration="5.075153324s" podCreationTimestamp="2026-01-27 05:40:43 +0000 UTC" firstStartedPulling="2026-01-27 05:40:43.620724857 +0000 UTC m=+20.750632500" lastFinishedPulling="2026-01-27 05:40:46.270197116 +0000 UTC m=+23.400104760" observedRunningTime="2026-01-27 05:40:47.069878025 +0000 UTC m=+24.199785686" watchObservedRunningTime="2026-01-27 05:40:48.075153324 +0000 UTC m=+25.205060984" Jan 27 05:40:49.067654 containerd[1684]: time="2026-01-27T05:40:49.067362698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 27 05:40:49.973562 kubelet[2893]: E0127 05:40:49.973414 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:51.974129 kubelet[2893]: E0127 05:40:51.973131 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:52.831872 containerd[1684]: time="2026-01-27T05:40:52.831819678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:52.833369 containerd[1684]: time="2026-01-27T05:40:52.833228481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 27 05:40:52.835231 containerd[1684]: time="2026-01-27T05:40:52.835205448Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:52.838432 containerd[1684]: time="2026-01-27T05:40:52.838398941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:40:52.838939 containerd[1684]: time="2026-01-27T05:40:52.838910424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" 
in 3.77150199s" Jan 27 05:40:52.838939 containerd[1684]: time="2026-01-27T05:40:52.838932611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 27 05:40:52.844581 containerd[1684]: time="2026-01-27T05:40:52.844502621Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 27 05:40:52.863162 containerd[1684]: time="2026-01-27T05:40:52.862744742Z" level=info msg="Container a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:40:52.875681 containerd[1684]: time="2026-01-27T05:40:52.875594034Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c\"" Jan 27 05:40:52.877239 containerd[1684]: time="2026-01-27T05:40:52.877215868Z" level=info msg="StartContainer for \"a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c\"" Jan 27 05:40:52.878666 containerd[1684]: time="2026-01-27T05:40:52.878602305Z" level=info msg="connecting to shim a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c" address="unix:///run/containerd/s/9356b4f30b36d09ca1d81f6d99e6c41b49c94fc2da6620f4ab96e32229175607" protocol=ttrpc version=3 Jan 27 05:40:52.900211 systemd[1]: Started cri-containerd-a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c.scope - libcontainer container a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c. 
Jan 27 05:40:52.950993 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 27 05:40:52.951137 kernel: audit: type=1334 audit(1769492452.947:565): prog-id=169 op=LOAD Jan 27 05:40:52.947000 audit: BPF prog-id=169 op=LOAD Jan 27 05:40:52.947000 audit[3632]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.958310 kernel: audit: type=1300 audit(1769492452.947:565): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.958396 kernel: audit: type=1327 audit(1769492452.947:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.949000 audit: BPF prog-id=170 op=LOAD Jan 27 05:40:52.961598 kernel: audit: type=1334 audit(1769492452.949:566): prog-id=170 op=LOAD Jan 27 05:40:52.949000 audit[3632]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.968026 kernel: audit: type=1300 audit(1769492452.949:566): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.974199 kernel: audit: type=1327 audit(1769492452.949:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.949000 audit: BPF prog-id=170 op=UNLOAD Jan 27 05:40:52.977239 kernel: audit: type=1334 audit(1769492452.949:567): prog-id=170 op=UNLOAD Jan 27 05:40:52.949000 audit[3632]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.982156 kernel: audit: type=1300 audit(1769492452.949:567): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.982251 kernel: audit: type=1327 audit(1769492452.949:567): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.949000 audit: BPF prog-id=169 op=UNLOAD Jan 27 05:40:52.987126 kernel: audit: type=1334 audit(1769492452.949:568): prog-id=169 op=UNLOAD Jan 27 05:40:52.949000 audit[3632]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.949000 audit: BPF prog-id=171 op=LOAD Jan 27 05:40:52.949000 audit[3632]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3449 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:52.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132663966623161313861333034333239333130643532383762366666 Jan 27 05:40:52.994695 containerd[1684]: time="2026-01-27T05:40:52.994663355Z" level=info msg="StartContainer for \"a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c\" returns successfully" Jan 27 05:40:53.580620 containerd[1684]: time="2026-01-27T05:40:53.580566895Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 05:40:53.583273 systemd[1]: cri-containerd-a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c.scope: Deactivated successfully. Jan 27 05:40:53.584128 systemd[1]: cri-containerd-a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c.scope: Consumed 490ms CPU time, 196.5M memory peak, 171.3M written to disk. Jan 27 05:40:53.585000 audit: BPF prog-id=171 op=UNLOAD Jan 27 05:40:53.586454 containerd[1684]: time="2026-01-27T05:40:53.585885632Z" level=info msg="received container exit event container_id:\"a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c\" id:\"a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c\" pid:3644 exited_at:{seconds:1769492453 nanos:585695094}" Jan 27 05:40:53.593215 kubelet[2893]: I0127 05:40:53.593193 2893 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 27 05:40:53.618582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f9fb1a18a304329310d5287b6ff7225ac5b3aa2f4e37df1df6c35fd5d4c87c-rootfs.mount: Deactivated successfully. 
Jan 27 05:40:53.654139 systemd[1]: Created slice kubepods-burstable-poddf8cdf47_c52e_4d93_8feb_b0f69767774a.slice - libcontainer container kubepods-burstable-poddf8cdf47_c52e_4d93_8feb_b0f69767774a.slice. Jan 27 05:40:53.666883 systemd[1]: Created slice kubepods-besteffort-pod5de8dbc5_cf50_4a41_99c0_153b9e80ac79.slice - libcontainer container kubepods-besteffort-pod5de8dbc5_cf50_4a41_99c0_153b9e80ac79.slice. Jan 27 05:40:53.675779 systemd[1]: Created slice kubepods-besteffort-pode8248550_dadc_499c_aab6_b47350ead3d7.slice - libcontainer container kubepods-besteffort-pode8248550_dadc_499c_aab6_b47350ead3d7.slice. Jan 27 05:40:53.685944 systemd[1]: Created slice kubepods-besteffort-poda4123b97_2d89_42ef_9011_27c5f71176fd.slice - libcontainer container kubepods-besteffort-poda4123b97_2d89_42ef_9011_27c5f71176fd.slice. Jan 27 05:40:53.697544 systemd[1]: Created slice kubepods-besteffort-pod37c3a4c8_acf6_4f56_beff_6dcab7eb2ee8.slice - libcontainer container kubepods-besteffort-pod37c3a4c8_acf6_4f56_beff_6dcab7eb2ee8.slice. Jan 27 05:40:53.707678 systemd[1]: Created slice kubepods-besteffort-pod627719fb_0c0e_4f7d_a570_d19f7c72ca81.slice - libcontainer container kubepods-besteffort-pod627719fb_0c0e_4f7d_a570_d19f7c72ca81.slice. Jan 27 05:40:53.712464 systemd[1]: Created slice kubepods-besteffort-podfe2009bb_659b_4702_9d71_7f42d09232f9.slice - libcontainer container kubepods-besteffort-podfe2009bb_659b_4702_9d71_7f42d09232f9.slice. Jan 27 05:40:53.717528 systemd[1]: Created slice kubepods-burstable-pod39bafd82_e379_46da_b710_b960cdbdd540.slice - libcontainer container kubepods-burstable-pod39bafd82_e379_46da_b710_b960cdbdd540.slice. 
Jan 27 05:40:53.760231 kubelet[2893]: I0127 05:40:53.760184 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzbb\" (UniqueName: \"kubernetes.io/projected/39bafd82-e379-46da-b710-b960cdbdd540-kube-api-access-kgzbb\") pod \"coredns-66bc5c9577-zwfnm\" (UID: \"39bafd82-e379-46da-b710-b960cdbdd540\") " pod="kube-system/coredns-66bc5c9577-zwfnm" Jan 27 05:40:53.760397 kubelet[2893]: I0127 05:40:53.760245 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5de8dbc5-cf50-4a41-99c0-153b9e80ac79-calico-apiserver-certs\") pod \"calico-apiserver-5b4f587895-dm4v4\" (UID: \"5de8dbc5-cf50-4a41-99c0-153b9e80ac79\") " pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" Jan 27 05:40:53.760397 kubelet[2893]: I0127 05:40:53.760269 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4fcm\" (UniqueName: \"kubernetes.io/projected/df8cdf47-c52e-4d93-8feb-b0f69767774a-kube-api-access-v4fcm\") pod \"coredns-66bc5c9577-nvkh6\" (UID: \"df8cdf47-c52e-4d93-8feb-b0f69767774a\") " pod="kube-system/coredns-66bc5c9577-nvkh6" Jan 27 05:40:53.760397 kubelet[2893]: I0127 05:40:53.760289 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df8cdf47-c52e-4d93-8feb-b0f69767774a-config-volume\") pod \"coredns-66bc5c9577-nvkh6\" (UID: \"df8cdf47-c52e-4d93-8feb-b0f69767774a\") " pod="kube-system/coredns-66bc5c9577-nvkh6" Jan 27 05:40:53.760397 kubelet[2893]: I0127 05:40:53.760308 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6w65\" (UniqueName: \"kubernetes.io/projected/5de8dbc5-cf50-4a41-99c0-153b9e80ac79-kube-api-access-s6w65\") pod \"calico-apiserver-5b4f587895-dm4v4\" (UID: 
\"5de8dbc5-cf50-4a41-99c0-153b9e80ac79\") " pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" Jan 27 05:40:53.760397 kubelet[2893]: I0127 05:40:53.760325 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39bafd82-e379-46da-b710-b960cdbdd540-config-volume\") pod \"coredns-66bc5c9577-zwfnm\" (UID: \"39bafd82-e379-46da-b710-b960cdbdd540\") " pod="kube-system/coredns-66bc5c9577-zwfnm" Jan 27 05:40:53.861604 kubelet[2893]: I0127 05:40:53.861000 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jgq\" (UniqueName: \"kubernetes.io/projected/fe2009bb-659b-4702-9d71-7f42d09232f9-kube-api-access-q5jgq\") pod \"whisker-8559cb56b-mjdnn\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " pod="calico-system/whisker-8559cb56b-mjdnn" Jan 27 05:40:53.861707 kubelet[2893]: I0127 05:40:53.861674 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ck9f\" (UniqueName: \"kubernetes.io/projected/37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8-kube-api-access-4ck9f\") pod \"calico-apiserver-64987cbdf8-thlkg\" (UID: \"37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8\") " pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" Jan 27 05:40:53.861747 kubelet[2893]: I0127 05:40:53.861706 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kwc\" (UniqueName: \"kubernetes.io/projected/e8248550-dadc-499c-aab6-b47350ead3d7-kube-api-access-g5kwc\") pod \"goldmane-7c778bb748-4t22j\" (UID: \"e8248550-dadc-499c-aab6-b47350ead3d7\") " pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:53.861747 kubelet[2893]: I0127 05:40:53.861733 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-ca-bundle\") pod \"whisker-8559cb56b-mjdnn\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " pod="calico-system/whisker-8559cb56b-mjdnn" Jan 27 05:40:53.861800 kubelet[2893]: I0127 05:40:53.861751 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd82t\" (UniqueName: \"kubernetes.io/projected/a4123b97-2d89-42ef-9011-27c5f71176fd-kube-api-access-fd82t\") pod \"calico-kube-controllers-65f4f5cc45-xxl57\" (UID: \"a4123b97-2d89-42ef-9011-27c5f71176fd\") " pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" Jan 27 05:40:53.861800 kubelet[2893]: I0127 05:40:53.861790 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-backend-key-pair\") pod \"whisker-8559cb56b-mjdnn\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " pod="calico-system/whisker-8559cb56b-mjdnn" Jan 27 05:40:53.861853 kubelet[2893]: I0127 05:40:53.861805 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4123b97-2d89-42ef-9011-27c5f71176fd-tigera-ca-bundle\") pod \"calico-kube-controllers-65f4f5cc45-xxl57\" (UID: \"a4123b97-2d89-42ef-9011-27c5f71176fd\") " pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" Jan 27 05:40:53.861853 kubelet[2893]: I0127 05:40:53.861836 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8248550-dadc-499c-aab6-b47350ead3d7-config\") pod \"goldmane-7c778bb748-4t22j\" (UID: \"e8248550-dadc-499c-aab6-b47350ead3d7\") " pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:53.861904 kubelet[2893]: I0127 05:40:53.861851 2893 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8248550-dadc-499c-aab6-b47350ead3d7-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-4t22j\" (UID: \"e8248550-dadc-499c-aab6-b47350ead3d7\") " pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:53.861904 kubelet[2893]: I0127 05:40:53.861867 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e8248550-dadc-499c-aab6-b47350ead3d7-goldmane-key-pair\") pod \"goldmane-7c778bb748-4t22j\" (UID: \"e8248550-dadc-499c-aab6-b47350ead3d7\") " pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:53.861904 kubelet[2893]: I0127 05:40:53.861883 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/627719fb-0c0e-4f7d-a570-d19f7c72ca81-calico-apiserver-certs\") pod \"calico-apiserver-64987cbdf8-c8xsr\" (UID: \"627719fb-0c0e-4f7d-a570-d19f7c72ca81\") " pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" Jan 27 05:40:53.861993 kubelet[2893]: I0127 05:40:53.861903 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gphrc\" (UniqueName: \"kubernetes.io/projected/627719fb-0c0e-4f7d-a570-d19f7c72ca81-kube-api-access-gphrc\") pod \"calico-apiserver-64987cbdf8-c8xsr\" (UID: \"627719fb-0c0e-4f7d-a570-d19f7c72ca81\") " pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" Jan 27 05:40:53.861993 kubelet[2893]: I0127 05:40:53.861935 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8-calico-apiserver-certs\") pod \"calico-apiserver-64987cbdf8-thlkg\" (UID: \"37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8\") " 
pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" Jan 27 05:40:53.973215 containerd[1684]: time="2026-01-27T05:40:53.969479669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nvkh6,Uid:df8cdf47-c52e-4d93-8feb-b0f69767774a,Namespace:kube-system,Attempt:0,}" Jan 27 05:40:53.996718 containerd[1684]: time="2026-01-27T05:40:53.996319591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4f587895-dm4v4,Uid:5de8dbc5-cf50-4a41-99c0-153b9e80ac79,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:40:54.005774 systemd[1]: Created slice kubepods-besteffort-pod2fd25125_023e_4bbf_9ed8_e267fcf6bfb3.slice - libcontainer container kubepods-besteffort-pod2fd25125_023e_4bbf_9ed8_e267fcf6bfb3.slice. Jan 27 05:40:54.010665 containerd[1684]: time="2026-01-27T05:40:54.010636498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-thlkg,Uid:37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:40:54.015433 containerd[1684]: time="2026-01-27T05:40:54.015407204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-c8xsr,Uid:627719fb-0c0e-4f7d-a570-d19f7c72ca81,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:40:54.017131 containerd[1684]: time="2026-01-27T05:40:54.017109462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt29m,Uid:2fd25125-023e-4bbf-9ed8-e267fcf6bfb3,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:54.020928 containerd[1684]: time="2026-01-27T05:40:54.020907289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8559cb56b-mjdnn,Uid:fe2009bb-659b-4702-9d71-7f42d09232f9,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:54.024836 containerd[1684]: time="2026-01-27T05:40:54.024773539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zwfnm,Uid:39bafd82-e379-46da-b710-b960cdbdd540,Namespace:kube-system,Attempt:0,}" Jan 27 
05:40:54.065033 containerd[1684]: time="2026-01-27T05:40:54.064884898Z" level=error msg="Failed to destroy network for sandbox \"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.075619 containerd[1684]: time="2026-01-27T05:40:54.075554151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nvkh6,Uid:df8cdf47-c52e-4d93-8feb-b0f69767774a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.076027 kubelet[2893]: E0127 05:40:54.075975 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.076442 kubelet[2893]: E0127 05:40:54.076047 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nvkh6" Jan 27 05:40:54.076442 kubelet[2893]: E0127 05:40:54.076065 2893 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nvkh6" Jan 27 05:40:54.076442 kubelet[2893]: E0127 05:40:54.076108 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nvkh6_kube-system(df8cdf47-c52e-4d93-8feb-b0f69767774a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nvkh6_kube-system(df8cdf47-c52e-4d93-8feb-b0f69767774a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f520c4cbf6bc871887343800e9c32e6c8ca1942381810f09ca1d9d3aa15f4707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nvkh6" podUID="df8cdf47-c52e-4d93-8feb-b0f69767774a" Jan 27 05:40:54.089180 containerd[1684]: time="2026-01-27T05:40:54.089152510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 27 05:40:54.161242 containerd[1684]: time="2026-01-27T05:40:54.160681930Z" level=error msg="Failed to destroy network for sandbox \"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.166310 containerd[1684]: time="2026-01-27T05:40:54.166134689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4f587895-dm4v4,Uid:5de8dbc5-cf50-4a41-99c0-153b9e80ac79,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.166971 kubelet[2893]: E0127 05:40:54.166344 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.166971 kubelet[2893]: E0127 05:40:54.166464 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" Jan 27 05:40:54.166971 kubelet[2893]: E0127 05:40:54.166481 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" Jan 27 05:40:54.167150 kubelet[2893]: E0127 05:40:54.166538 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e890c72fd2aaeb9d09d88369929457d8ee10f64487f7d6ce7079b2a23042ded\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:40:54.181307 containerd[1684]: time="2026-01-27T05:40:54.181270047Z" level=error msg="Failed to destroy network for sandbox \"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.185955 containerd[1684]: time="2026-01-27T05:40:54.185914228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zwfnm,Uid:39bafd82-e379-46da-b710-b960cdbdd540,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.187688 kubelet[2893]: E0127 05:40:54.187054 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.187688 kubelet[2893]: E0127 05:40:54.187112 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zwfnm" Jan 27 05:40:54.187688 kubelet[2893]: E0127 05:40:54.187148 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zwfnm" Jan 27 05:40:54.187815 kubelet[2893]: E0127 05:40:54.187206 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zwfnm_kube-system(39bafd82-e379-46da-b710-b960cdbdd540)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zwfnm_kube-system(39bafd82-e379-46da-b710-b960cdbdd540)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee115bb690362dea2b67cf0c02b04c4854e501651169ce1a178ec6e6ce40726c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zwfnm" podUID="39bafd82-e379-46da-b710-b960cdbdd540" Jan 27 05:40:54.188931 containerd[1684]: time="2026-01-27T05:40:54.188903516Z" level=error msg="Failed to destroy network for 
sandbox \"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.192220 containerd[1684]: time="2026-01-27T05:40:54.192189172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-c8xsr,Uid:627719fb-0c0e-4f7d-a570-d19f7c72ca81,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.192433 kubelet[2893]: E0127 05:40:54.192401 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.192480 kubelet[2893]: E0127 05:40:54.192449 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" Jan 27 05:40:54.192480 kubelet[2893]: E0127 05:40:54.192466 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" Jan 27 05:40:54.192539 kubelet[2893]: E0127 05:40:54.192518 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bbb3452cb7bee426c22ae0a7edb55b57c670e74109ac304d3dba62e98d24f5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:40:54.197090 containerd[1684]: time="2026-01-27T05:40:54.197064285Z" level=error msg="Failed to destroy network for sandbox \"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.200582 containerd[1684]: time="2026-01-27T05:40:54.200544130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8559cb56b-mjdnn,Uid:fe2009bb-659b-4702-9d71-7f42d09232f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.202288 kubelet[2893]: E0127 05:40:54.202056 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.202288 kubelet[2893]: E0127 05:40:54.202111 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8559cb56b-mjdnn" Jan 27 05:40:54.202288 kubelet[2893]: E0127 05:40:54.202131 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8559cb56b-mjdnn" Jan 27 05:40:54.202405 kubelet[2893]: E0127 05:40:54.202173 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8559cb56b-mjdnn_calico-system(fe2009bb-659b-4702-9d71-7f42d09232f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8559cb56b-mjdnn_calico-system(fe2009bb-659b-4702-9d71-7f42d09232f9)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"ee57064cc6e8b6223f259e8e112cafb77f6e6a27fcc34a4566a87a9dd8358e83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8559cb56b-mjdnn" podUID="fe2009bb-659b-4702-9d71-7f42d09232f9" Jan 27 05:40:54.206820 containerd[1684]: time="2026-01-27T05:40:54.206788585Z" level=error msg="Failed to destroy network for sandbox \"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.210116 containerd[1684]: time="2026-01-27T05:40:54.210082098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-thlkg,Uid:37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.210418 kubelet[2893]: E0127 05:40:54.210248 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.210418 kubelet[2893]: E0127 05:40:54.210304 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" Jan 27 05:40:54.210418 kubelet[2893]: E0127 05:40:54.210321 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" Jan 27 05:40:54.210513 kubelet[2893]: E0127 05:40:54.210370 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf8dbe1053d671db40823a03a5cce1a947655b87f0f5adfc5e8c0d1acb5a685f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:40:54.216672 containerd[1684]: time="2026-01-27T05:40:54.216644131Z" level=error msg="Failed to destroy network for sandbox \"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 27 05:40:54.220035 containerd[1684]: time="2026-01-27T05:40:54.219998930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt29m,Uid:2fd25125-023e-4bbf-9ed8-e267fcf6bfb3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.220185 kubelet[2893]: E0127 05:40:54.220160 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.220227 kubelet[2893]: E0127 05:40:54.220201 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:54.220227 kubelet[2893]: E0127 05:40:54.220218 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-gt29m" Jan 27 05:40:54.220280 kubelet[2893]: E0127 05:40:54.220262 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95b935e24116b6ada76a9a38a9ed3c1dce6ca37dea7d17a142ef0eccc1e90c05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:40:54.285291 containerd[1684]: time="2026-01-27T05:40:54.285126460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4t22j,Uid:e8248550-dadc-499c-aab6-b47350ead3d7,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:54.296553 containerd[1684]: time="2026-01-27T05:40:54.296514296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f4f5cc45-xxl57,Uid:a4123b97-2d89-42ef-9011-27c5f71176fd,Namespace:calico-system,Attempt:0,}" Jan 27 05:40:54.343111 containerd[1684]: time="2026-01-27T05:40:54.342948682Z" level=error msg="Failed to destroy network for sandbox \"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.347630 containerd[1684]: time="2026-01-27T05:40:54.347571731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4t22j,Uid:e8248550-dadc-499c-aab6-b47350ead3d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.348073 kubelet[2893]: E0127 05:40:54.348037 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.348133 kubelet[2893]: E0127 05:40:54.348102 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:54.348133 kubelet[2893]: E0127 05:40:54.348123 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4t22j" Jan 27 05:40:54.348206 kubelet[2893]: E0127 05:40:54.348187 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e4787e90849feaa48785e4659393d863a60ca51a56f16132e8f09de1df96fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:40:54.358363 containerd[1684]: time="2026-01-27T05:40:54.358318708Z" level=error msg="Failed to destroy network for sandbox \"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.361306 containerd[1684]: time="2026-01-27T05:40:54.361270292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f4f5cc45-xxl57,Uid:a4123b97-2d89-42ef-9011-27c5f71176fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 05:40:54.361609 kubelet[2893]: E0127 05:40:54.361472 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 
05:40:54.361609 kubelet[2893]: E0127 05:40:54.361532 2893 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" Jan 27 05:40:54.361609 kubelet[2893]: E0127 05:40:54.361549 2893 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" Jan 27 05:40:54.361788 kubelet[2893]: E0127 05:40:54.361754 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8555ef1a7d903552b42b6cfa9b44926751e579bc8f36343152b7f8985039f222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:40:56.401062 kubelet[2893]: I0127 05:40:56.400697 2893 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 27 05:40:56.429000 audit[3920]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:56.429000 audit[3920]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf5441910 a2=0 a3=7ffcf54418fc items=0 ppid=3018 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:56.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:40:56.434000 audit[3920]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:40:56.434000 audit[3920]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf5441910 a2=0 a3=7ffcf54418fc items=0 ppid=3018 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:40:56.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:00.092195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253533396.mount: Deactivated successfully. 
Jan 27 05:41:00.120146 containerd[1684]: time="2026-01-27T05:41:00.120101618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:41:00.121919 containerd[1684]: time="2026-01-27T05:41:00.121888300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 27 05:41:00.123574 containerd[1684]: time="2026-01-27T05:41:00.123527233Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:41:00.126441 containerd[1684]: time="2026-01-27T05:41:00.126395466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 05:41:00.127237 containerd[1684]: time="2026-01-27T05:41:00.127196500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.037008082s" Jan 27 05:41:00.127237 containerd[1684]: time="2026-01-27T05:41:00.127223104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 27 05:41:00.155211 containerd[1684]: time="2026-01-27T05:41:00.155156619Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 27 05:41:00.168071 containerd[1684]: time="2026-01-27T05:41:00.168035200Z" level=info msg="Container 
cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:41:00.180507 containerd[1684]: time="2026-01-27T05:41:00.180449958Z" level=info msg="CreateContainer within sandbox \"11caeb02db700909d011cb702f8c41806fbc73869d0da157544a99094863c856\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67\"" Jan 27 05:41:00.181365 containerd[1684]: time="2026-01-27T05:41:00.181308938Z" level=info msg="StartContainer for \"cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67\"" Jan 27 05:41:00.182856 containerd[1684]: time="2026-01-27T05:41:00.182834548Z" level=info msg="connecting to shim cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67" address="unix:///run/containerd/s/9356b4f30b36d09ca1d81f6d99e6c41b49c94fc2da6620f4ab96e32229175607" protocol=ttrpc version=3 Jan 27 05:41:00.241214 systemd[1]: Started cri-containerd-cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67.scope - libcontainer container cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67. 
Jan 27 05:41:00.285000 audit: BPF prog-id=172 op=LOAD Jan 27 05:41:00.288412 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 27 05:41:00.288470 kernel: audit: type=1334 audit(1769492460.285:573): prog-id=172 op=LOAD Jan 27 05:41:00.285000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.291707 kernel: audit: type=1300 audit(1769492460.285:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.296200 kernel: audit: type=1327 audit(1769492460.285:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.293000 audit: BPF prog-id=173 op=LOAD Jan 27 05:41:00.299660 kernel: audit: type=1334 audit(1769492460.293:574): prog-id=173 op=LOAD Jan 27 05:41:00.293000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.302293 kernel: audit: type=1300 audit(1769492460.293:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.293000 audit: BPF prog-id=173 op=UNLOAD Jan 27 05:41:00.310587 kernel: audit: type=1327 audit(1769492460.293:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.310627 kernel: audit: type=1334 audit(1769492460.293:575): prog-id=173 op=UNLOAD Jan 27 05:41:00.293000 audit[3927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.313888 kernel: audit: type=1300 audit(1769492460.293:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.319054 kernel: audit: type=1327 audit(1769492460.293:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.293000 audit: BPF prog-id=172 op=UNLOAD Jan 27 05:41:00.322170 kernel: audit: type=1334 audit(1769492460.293:576): prog-id=172 op=UNLOAD Jan 27 05:41:00.293000 audit[3927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.293000 audit: BPF prog-id=174 op=LOAD Jan 27 05:41:00.293000 audit[3927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3449 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:00.293000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366656637326535376139376534393363323666343739396461363232 Jan 27 05:41:00.338792 containerd[1684]: time="2026-01-27T05:41:00.338648112Z" level=info msg="StartContainer for \"cfef72e57a97e493c26f4799da622dfba3c6fe3663058c1d39704748e44b0c67\" returns successfully" Jan 27 05:41:00.527421 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 27 05:41:00.527535 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 27 05:41:00.704461 kubelet[2893]: I0127 05:41:00.704428 2893 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-ca-bundle\") pod \"fe2009bb-659b-4702-9d71-7f42d09232f9\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " Jan 27 05:41:00.705214 kubelet[2893]: I0127 05:41:00.704596 2893 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5jgq\" (UniqueName: \"kubernetes.io/projected/fe2009bb-659b-4702-9d71-7f42d09232f9-kube-api-access-q5jgq\") pod \"fe2009bb-659b-4702-9d71-7f42d09232f9\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " Jan 27 05:41:00.705214 kubelet[2893]: I0127 05:41:00.704616 2893 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-backend-key-pair\") pod \"fe2009bb-659b-4702-9d71-7f42d09232f9\" (UID: \"fe2009bb-659b-4702-9d71-7f42d09232f9\") " Jan 27 05:41:00.705214 kubelet[2893]: I0127 05:41:00.704836 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "fe2009bb-659b-4702-9d71-7f42d09232f9" (UID: "fe2009bb-659b-4702-9d71-7f42d09232f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 27 05:41:00.709344 kubelet[2893]: I0127 05:41:00.709287 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2009bb-659b-4702-9d71-7f42d09232f9-kube-api-access-q5jgq" (OuterVolumeSpecName: "kube-api-access-q5jgq") pod "fe2009bb-659b-4702-9d71-7f42d09232f9" (UID: "fe2009bb-659b-4702-9d71-7f42d09232f9"). InnerVolumeSpecName "kube-api-access-q5jgq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 27 05:41:00.709661 kubelet[2893]: I0127 05:41:00.709615 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fe2009bb-659b-4702-9d71-7f42d09232f9" (UID: "fe2009bb-659b-4702-9d71-7f42d09232f9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 27 05:41:00.805655 kubelet[2893]: I0127 05:41:00.805356 2893 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-ca-bundle\") on node \"ci-4592-0-0-n-5ca0d578df\" DevicePath \"\"" Jan 27 05:41:00.805655 kubelet[2893]: I0127 05:41:00.805391 2893 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5jgq\" (UniqueName: \"kubernetes.io/projected/fe2009bb-659b-4702-9d71-7f42d09232f9-kube-api-access-q5jgq\") on node \"ci-4592-0-0-n-5ca0d578df\" DevicePath \"\"" Jan 27 05:41:00.805655 kubelet[2893]: I0127 05:41:00.805400 2893 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe2009bb-659b-4702-9d71-7f42d09232f9-whisker-backend-key-pair\") on node \"ci-4592-0-0-n-5ca0d578df\" DevicePath \"\"" Jan 27 05:41:00.985616 systemd[1]: Removed slice kubepods-besteffort-podfe2009bb_659b_4702_9d71_7f42d09232f9.slice - libcontainer container kubepods-besteffort-podfe2009bb_659b_4702_9d71_7f42d09232f9.slice. Jan 27 05:41:01.094138 systemd[1]: var-lib-kubelet-pods-fe2009bb\x2d659b\x2d4702\x2d9d71\x2d7f42d09232f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq5jgq.mount: Deactivated successfully. Jan 27 05:41:01.094621 systemd[1]: var-lib-kubelet-pods-fe2009bb\x2d659b\x2d4702\x2d9d71\x2d7f42d09232f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 27 05:41:01.129356 kubelet[2893]: I0127 05:41:01.129251 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lr6sz" podStartSLOduration=1.7708143029999999 podStartE2EDuration="18.129234671s" podCreationTimestamp="2026-01-27 05:40:43 +0000 UTC" firstStartedPulling="2026-01-27 05:40:43.774290175 +0000 UTC m=+20.904197819" lastFinishedPulling="2026-01-27 05:41:00.132710545 +0000 UTC m=+37.262618187" observedRunningTime="2026-01-27 05:41:01.127172485 +0000 UTC m=+38.257080165" watchObservedRunningTime="2026-01-27 05:41:01.129234671 +0000 UTC m=+38.259142336" Jan 27 05:41:01.192834 systemd[1]: Created slice kubepods-besteffort-podf422b70f_feda_43c7_ab21_bd446de0a9bb.slice - libcontainer container kubepods-besteffort-podf422b70f_feda_43c7_ab21_bd446de0a9bb.slice. Jan 27 05:41:01.208459 kubelet[2893]: I0127 05:41:01.208419 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f422b70f-feda-43c7-ab21-bd446de0a9bb-whisker-backend-key-pair\") pod \"whisker-66dfc449f6-njgj9\" (UID: \"f422b70f-feda-43c7-ab21-bd446de0a9bb\") " pod="calico-system/whisker-66dfc449f6-njgj9" Jan 27 05:41:01.208958 kubelet[2893]: I0127 05:41:01.208792 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkzx\" (UniqueName: \"kubernetes.io/projected/f422b70f-feda-43c7-ab21-bd446de0a9bb-kube-api-access-ljkzx\") pod \"whisker-66dfc449f6-njgj9\" (UID: \"f422b70f-feda-43c7-ab21-bd446de0a9bb\") " pod="calico-system/whisker-66dfc449f6-njgj9" Jan 27 05:41:01.208958 kubelet[2893]: I0127 05:41:01.208828 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f422b70f-feda-43c7-ab21-bd446de0a9bb-whisker-ca-bundle\") pod \"whisker-66dfc449f6-njgj9\" (UID: 
\"f422b70f-feda-43c7-ab21-bd446de0a9bb\") " pod="calico-system/whisker-66dfc449f6-njgj9" Jan 27 05:41:01.502521 containerd[1684]: time="2026-01-27T05:41:01.502256895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dfc449f6-njgj9,Uid:f422b70f-feda-43c7-ab21-bd446de0a9bb,Namespace:calico-system,Attempt:0,}" Jan 27 05:41:01.807175 systemd-networkd[1565]: calic4f159b8662: Link UP Jan 27 05:41:01.808271 systemd-networkd[1565]: calic4f159b8662: Gained carrier Jan 27 05:41:01.827548 containerd[1684]: 2026-01-27 05:41:01.538 [INFO][4017] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 05:41:01.827548 containerd[1684]: 2026-01-27 05:41:01.704 [INFO][4017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0 whisker-66dfc449f6- calico-system f422b70f-feda-43c7-ab21-bd446de0a9bb 924 0 2026-01-27 05:41:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66dfc449f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df whisker-66dfc449f6-njgj9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic4f159b8662 [] [] }} ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-" Jan 27 05:41:01.827548 containerd[1684]: 2026-01-27 05:41:01.708 [INFO][4017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.827548 containerd[1684]: 2026-01-27 05:41:01.746 [INFO][4028] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" HandleID="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Workload="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.746 [INFO][4028] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" HandleID="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Workload="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"whisker-66dfc449f6-njgj9", "timestamp":"2026-01-27 05:41:01.746601488 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.746 [INFO][4028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.746 [INFO][4028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.747 [INFO][4028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.754 [INFO][4028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.759 [INFO][4028] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.763 [INFO][4028] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.766 [INFO][4028] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828120 containerd[1684]: 2026-01-27 05:41:01.768 [INFO][4028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.768 [INFO][4028] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.770 [INFO][4028] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635 Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.773 [INFO][4028] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.780 [INFO][4028] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.36.193/26] block=192.168.36.192/26 handle="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.780 [INFO][4028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.193/26] handle="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.780 [INFO][4028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:01.828322 containerd[1684]: 2026-01-27 05:41:01.780 [INFO][4028] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.193/26] IPv6=[] ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" HandleID="k8s-pod-network.cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Workload="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.828502 containerd[1684]: 2026-01-27 05:41:01.787 [INFO][4017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0", GenerateName:"whisker-66dfc449f6-", Namespace:"calico-system", SelfLink:"", UID:"f422b70f-feda-43c7-ab21-bd446de0a9bb", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 41, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66dfc449f6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"whisker-66dfc449f6-njgj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4f159b8662", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:01.828502 containerd[1684]: 2026-01-27 05:41:01.788 [INFO][4017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.193/32] ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.828576 containerd[1684]: 2026-01-27 05:41:01.788 [INFO][4017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4f159b8662 ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.828576 containerd[1684]: 2026-01-27 05:41:01.811 [INFO][4017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.828612 containerd[1684]: 2026-01-27 05:41:01.811 [INFO][4017] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0", GenerateName:"whisker-66dfc449f6-", Namespace:"calico-system", SelfLink:"", UID:"f422b70f-feda-43c7-ab21-bd446de0a9bb", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 41, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66dfc449f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635", Pod:"whisker-66dfc449f6-njgj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4f159b8662", MAC:"d6:1c:11:ae:01:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:01.828662 containerd[1684]: 2026-01-27 05:41:01.824 [INFO][4017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" Namespace="calico-system" Pod="whisker-66dfc449f6-njgj9" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-whisker--66dfc449f6--njgj9-eth0" Jan 27 05:41:01.897112 containerd[1684]: time="2026-01-27T05:41:01.897059433Z" level=info msg="connecting to shim cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635" address="unix:///run/containerd/s/ce2c959eea1284d4a5a2e15e3a28f7f25a6c2b08f6f570fef13a092ad13231b9" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:01.924518 systemd[1]: Started cri-containerd-cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635.scope - libcontainer container cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635. Jan 27 05:41:01.942000 audit: BPF prog-id=175 op=LOAD Jan 27 05:41:01.942000 audit: BPF prog-id=176 op=LOAD Jan 27 05:41:01.942000 audit[4117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.942000 audit: BPF prog-id=176 op=UNLOAD Jan 27 05:41:01.942000 audit[4117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.942000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.943000 audit: BPF prog-id=177 op=LOAD Jan 27 05:41:01.943000 audit[4117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.944000 audit: BPF prog-id=178 op=LOAD Jan 27 05:41:01.944000 audit[4117]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.944000 audit: BPF prog-id=178 op=UNLOAD Jan 27 05:41:01.944000 audit[4117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:41:01.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.944000 audit: BPF prog-id=177 op=UNLOAD Jan 27 05:41:01.944000 audit[4117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:01.944000 audit: BPF prog-id=179 op=LOAD Jan 27 05:41:01.944000 audit[4117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4101 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:01.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364656362373131323838373237326365353831613137626436313063 Jan 27 05:41:02.008712 containerd[1684]: time="2026-01-27T05:41:02.008673314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dfc449f6-njgj9,Uid:f422b70f-feda-43c7-ab21-bd446de0a9bb,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"cdecb7112887272ce581a17bd610c4aa01a26dccc7ebb6ed809f264d81e9e635\"" Jan 27 05:41:02.011764 containerd[1684]: time="2026-01-27T05:41:02.011726420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:41:02.250000 audit: BPF prog-id=180 op=LOAD Jan 27 05:41:02.250000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0a974bc0 a2=98 a3=1fffffffffffffff items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.250000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.250000 audit: BPF prog-id=180 op=UNLOAD Jan 27 05:41:02.250000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0a974b90 a3=0 items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.250000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.251000 audit: BPF prog-id=181 op=LOAD Jan 27 05:41:02.251000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0a974aa0 a2=94 a3=3 items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:02.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.251000 audit: BPF prog-id=181 op=UNLOAD Jan 27 05:41:02.251000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff0a974aa0 a2=94 a3=3 items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.251000 audit: BPF prog-id=182 op=LOAD Jan 27 05:41:02.251000 audit[4235]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0a974ae0 a2=94 a3=7fff0a974cc0 items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.251000 audit: BPF prog-id=182 op=UNLOAD Jan 27 05:41:02.251000 audit[4235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff0a974ae0 a2=94 a3=7fff0a974cc0 items=0 ppid=4057 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.251000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 05:41:02.252000 audit: BPF prog-id=183 op=LOAD Jan 27 05:41:02.252000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda6910e50 a2=98 a3=3 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.252000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.252000 audit: BPF prog-id=183 op=UNLOAD Jan 27 05:41:02.252000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffda6910e20 a3=0 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.252000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.253000 audit: BPF prog-id=184 op=LOAD Jan 27 05:41:02.253000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda6910c40 a2=94 a3=54428f items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.253000 audit: BPF prog-id=184 op=UNLOAD Jan 27 05:41:02.253000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda6910c40 
a2=94 a3=54428f items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.253000 audit: BPF prog-id=185 op=LOAD Jan 27 05:41:02.253000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda6910c70 a2=94 a3=2 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.253000 audit: BPF prog-id=185 op=UNLOAD Jan 27 05:41:02.253000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda6910c70 a2=0 a3=2 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.354916 containerd[1684]: time="2026-01-27T05:41:02.354861446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:02.356881 containerd[1684]: time="2026-01-27T05:41:02.356845332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:41:02.356962 containerd[1684]: time="2026-01-27T05:41:02.356946536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:02.357143 
kubelet[2893]: E0127 05:41:02.357116 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:02.357777 kubelet[2893]: E0127 05:41:02.357470 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:02.357862 kubelet[2893]: E0127 05:41:02.357847 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:02.359857 containerd[1684]: time="2026-01-27T05:41:02.359833650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:41:02.430000 audit: BPF prog-id=186 op=LOAD Jan 27 05:41:02.430000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffda6910b30 a2=94 a3=1 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.430000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.431000 audit: BPF prog-id=186 op=UNLOAD Jan 27 05:41:02.431000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffda6910b30 a2=94 a3=1 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.431000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.441000 audit: BPF prog-id=187 op=LOAD Jan 27 05:41:02.441000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda6910b20 a2=94 a3=4 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.441000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.442000 audit: BPF prog-id=187 op=UNLOAD Jan 27 05:41:02.442000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda6910b20 a2=0 a3=4 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.442000 audit: BPF prog-id=188 op=LOAD Jan 27 05:41:02.442000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffda6910980 a2=94 a3=5 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.442000 audit: BPF prog-id=188 op=UNLOAD Jan 27 05:41:02.442000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffda6910980 a2=0 a3=5 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.442000 audit: BPF prog-id=189 op=LOAD Jan 27 05:41:02.442000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda6910ba0 a2=94 a3=6 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.442000 audit: BPF prog-id=189 op=UNLOAD Jan 27 05:41:02.442000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffda6910ba0 a2=0 a3=6 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.442000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.443000 audit: BPF prog-id=190 op=LOAD Jan 27 05:41:02.443000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffda6910350 a2=94 a3=88 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.443000 audit: BPF prog-id=191 op=LOAD Jan 27 05:41:02.443000 audit[4236]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffda69101d0 a2=94 a3=2 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:41:02.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.443000 audit: BPF prog-id=191 op=UNLOAD Jan 27 05:41:02.443000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffda6910200 a2=0 a3=7ffda6910300 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.443000 audit: BPF prog-id=190 op=UNLOAD Jan 27 05:41:02.443000 audit[4236]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2cb52d10 a2=0 a3=cc666358730afd74 items=0 ppid=4057 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 05:41:02.453000 audit: BPF prog-id=192 op=LOAD Jan 27 05:41:02.453000 audit[4240]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd53901a00 a2=98 a3=1999999999999999 items=0 ppid=4057 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.453000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.454000 audit: BPF prog-id=192 op=UNLOAD Jan 27 05:41:02.454000 audit[4240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd539019d0 a3=0 items=0 ppid=4057 pid=4240 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.454000 audit: BPF prog-id=193 op=LOAD Jan 27 05:41:02.454000 audit[4240]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd539018e0 a2=94 a3=ffff items=0 ppid=4057 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.454000 audit: BPF prog-id=193 op=UNLOAD Jan 27 05:41:02.454000 audit[4240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd539018e0 a2=94 a3=ffff items=0 ppid=4057 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.454000 audit: BPF prog-id=194 op=LOAD Jan 27 05:41:02.454000 audit[4240]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=5 a1=7ffd53901920 a2=94 a3=7ffd53901b00 items=0 ppid=4057 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.454000 audit: BPF prog-id=194 op=UNLOAD Jan 27 05:41:02.454000 audit[4240]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd53901920 a2=94 a3=7ffd53901b00 items=0 ppid=4057 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.454000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 05:41:02.518532 systemd-networkd[1565]: vxlan.calico: Link UP Jan 27 05:41:02.518539 systemd-networkd[1565]: vxlan.calico: Gained carrier Jan 27 05:41:02.538000 audit: BPF prog-id=195 op=LOAD Jan 27 05:41:02.538000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04527cb0 a2=98 a3=0 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.538000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.538000 audit: BPF prog-id=195 op=UNLOAD Jan 27 05:41:02.538000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc04527c80 a3=0 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.538000 audit: BPF prog-id=196 op=LOAD Jan 27 05:41:02.538000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04527ac0 a2=94 a3=54428f items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.538000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=196 op=UNLOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc04527ac0 a2=94 a3=54428f items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=197 op=LOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc04527af0 a2=94 a3=2 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=197 op=UNLOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc04527af0 a2=0 a3=2 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=198 op=LOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc045278a0 a2=94 a3=4 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=198 op=UNLOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc045278a0 a2=94 a3=4 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=199 op=LOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc045279a0 a2=94 a3=7ffc04527b20 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.539000 audit: BPF prog-id=199 op=UNLOAD Jan 27 05:41:02.539000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc045279a0 a2=0 a3=7ffc04527b20 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.539000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.540000 audit: BPF prog-id=200 op=LOAD Jan 27 05:41:02.540000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc045270d0 a2=94 a3=2 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.540000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.540000 audit: BPF prog-id=200 op=UNLOAD Jan 27 05:41:02.540000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc045270d0 a2=0 a3=2 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.540000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.540000 audit: BPF prog-id=201 op=LOAD Jan 27 05:41:02.540000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc045271d0 a2=94 a3=30 items=0 ppid=4057 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.540000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 05:41:02.554000 audit: BPF prog-id=202 op=LOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff35019ce0 a2=98 a3=0 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.554000 audit: BPF prog-id=202 op=UNLOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff35019cb0 a3=0 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.554000 audit: BPF prog-id=203 op=LOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff35019ad0 a2=94 a3=54428f items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.554000 audit: BPF prog-id=203 op=UNLOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff35019ad0 a2=94 a3=54428f items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.554000 audit: BPF prog-id=204 op=LOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff35019b00 a2=94 a3=2 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.554000 audit: BPF prog-id=204 op=UNLOAD Jan 27 05:41:02.554000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff35019b00 a2=0 a3=2 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.554000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.706860 containerd[1684]: time="2026-01-27T05:41:02.706816156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:02.709437 containerd[1684]: time="2026-01-27T05:41:02.709354616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:41:02.709499 containerd[1684]: time="2026-01-27T05:41:02.709425837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:02.709622 kubelet[2893]: E0127 05:41:02.709580 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:02.709667 kubelet[2893]: E0127 05:41:02.709619 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:02.709720 kubelet[2893]: E0127 05:41:02.709688 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:02.709776 kubelet[2893]: E0127 05:41:02.709736 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:02.718000 audit: BPF prog-id=205 op=LOAD Jan 27 05:41:02.718000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff350199c0 a2=94 a3=1 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.718000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.719000 audit: BPF prog-id=205 op=UNLOAD Jan 27 05:41:02.719000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff350199c0 a2=94 a3=1 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.719000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.729000 audit: BPF prog-id=206 op=LOAD Jan 27 05:41:02.729000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff350199b0 a2=94 a3=4 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.729000 audit: BPF prog-id=206 op=UNLOAD Jan 27 05:41:02.729000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff350199b0 a2=0 a3=4 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.729000 audit: BPF prog-id=207 op=LOAD Jan 27 05:41:02.729000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff35019810 a2=94 a3=5 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.729000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.729000 audit: BPF prog-id=207 op=UNLOAD Jan 27 05:41:02.729000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff35019810 a2=0 a3=5 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.729000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.730000 audit: BPF prog-id=208 op=LOAD Jan 27 05:41:02.730000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff35019a30 a2=94 a3=6 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.730000 audit: BPF prog-id=208 op=UNLOAD Jan 27 05:41:02.730000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff35019a30 a2=0 a3=6 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.730000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.730000 audit: BPF prog-id=209 op=LOAD Jan 27 05:41:02.730000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff350191e0 a2=94 a3=88 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.730000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.731000 audit: BPF prog-id=210 op=LOAD Jan 27 05:41:02.731000 audit[4271]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff35019060 a2=94 a3=2 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.731000 audit: BPF prog-id=210 op=UNLOAD Jan 27 05:41:02.731000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff35019090 a2=0 a3=7fff35019190 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.731000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.731000 audit: BPF prog-id=209 op=UNLOAD Jan 27 05:41:02.731000 audit[4271]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3c715d10 a2=0 a3=fd6d049c6653f7c8 items=0 ppid=4057 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 05:41:02.740000 audit: BPF prog-id=201 op=UNLOAD Jan 27 05:41:02.740000 audit[4057]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000e77740 a2=0 a3=0 items=0 ppid=4041 pid=4057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.740000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 27 05:41:02.803000 audit[4294]: NETFILTER_CFG table=mangle:117 family=2 entries=16 op=nft_register_chain pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:02.803000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffdb102ee0 a2=0 a3=7fffdb102ecc items=0 ppid=4057 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.803000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:02.810000 audit[4296]: NETFILTER_CFG table=nat:118 family=2 entries=15 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:02.810000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc5ba89d20 a2=0 a3=7ffc5ba89d0c items=0 ppid=4057 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.810000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:02.814000 audit[4293]: NETFILTER_CFG table=raw:119 family=2 entries=21 op=nft_register_chain pid=4293 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:02.814000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc1e2a1a70 a2=0 a3=7ffc1e2a1a5c items=0 ppid=4057 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.814000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:02.815000 audit[4295]: NETFILTER_CFG table=filter:120 family=2 entries=94 op=nft_register_chain pid=4295 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:02.815000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffea6b18f00 a2=0 a3=7ffea6b18eec items=0 ppid=4057 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:02.815000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:02.936178 systemd-networkd[1565]: calic4f159b8662: Gained IPv6LL Jan 27 05:41:02.975463 kubelet[2893]: I0127 05:41:02.975323 2893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2009bb-659b-4702-9d71-7f42d09232f9" path="/var/lib/kubelet/pods/fe2009bb-659b-4702-9d71-7f42d09232f9/volumes" Jan 27 05:41:03.118032 kubelet[2893]: E0127 05:41:03.117363 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:03.154000 audit[4309]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:03.154000 audit[4309]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce4f2e7b0 a2=0 a3=7ffce4f2e79c items=0 ppid=3018 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:03.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:03.160000 audit[4309]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:03.160000 audit[4309]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce4f2e7b0 a2=0 a3=0 items=0 ppid=3018 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:03.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:04.216208 systemd-networkd[1565]: vxlan.calico: Gained IPv6LL Jan 27 05:41:04.981989 containerd[1684]: time="2026-01-27T05:41:04.981901973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-c8xsr,Uid:627719fb-0c0e-4f7d-a570-d19f7c72ca81,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:41:04.986645 containerd[1684]: time="2026-01-27T05:41:04.986573044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-thlkg,Uid:37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:41:04.988976 containerd[1684]: time="2026-01-27T05:41:04.988895454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f4f5cc45-xxl57,Uid:a4123b97-2d89-42ef-9011-27c5f71176fd,Namespace:calico-system,Attempt:0,}" Jan 27 05:41:05.171542 systemd-networkd[1565]: cali3e6df4cbaca: Link UP Jan 27 05:41:05.171742 systemd-networkd[1565]: cali3e6df4cbaca: Gained carrier Jan 27 05:41:05.188328 containerd[1684]: 2026-01-27 05:41:05.072 
[INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0 calico-apiserver-64987cbdf8- calico-apiserver 627719fb-0c0e-4f7d-a570-d19f7c72ca81 855 0 2026-01-27 05:40:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64987cbdf8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df calico-apiserver-64987cbdf8-c8xsr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e6df4cbaca [] [] }} ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-" Jan 27 05:41:05.188328 containerd[1684]: 2026-01-27 05:41:05.072 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.188328 containerd[1684]: 2026-01-27 05:41:05.120 [INFO][4346] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" HandleID="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.121 [INFO][4346] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" 
HandleID="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"calico-apiserver-64987cbdf8-c8xsr", "timestamp":"2026-01-27 05:41:05.120262443 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.121 [INFO][4346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.121 [INFO][4346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.121 [INFO][4346] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.131 [INFO][4346] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.137 [INFO][4346] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.144 [INFO][4346] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.146 [INFO][4346] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188538 containerd[1684]: 2026-01-27 05:41:05.149 [INFO][4346] ipam/ipam.go 235: Affinity 
is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.149 [INFO][4346] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.151 [INFO][4346] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.155 [INFO][4346] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.160 [INFO][4346] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.194/26] block=192.168.36.192/26 handle="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.160 [INFO][4346] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.194/26] handle="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.161 [INFO][4346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:41:05.188735 containerd[1684]: 2026-01-27 05:41:05.161 [INFO][4346] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.194/26] IPv6=[] ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" HandleID="k8s-pod-network.4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.188868 containerd[1684]: 2026-01-27 05:41:05.164 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0", GenerateName:"calico-apiserver-64987cbdf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"627719fb-0c0e-4f7d-a570-d19f7c72ca81", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64987cbdf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"calico-apiserver-64987cbdf8-c8xsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e6df4cbaca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.188918 containerd[1684]: 2026-01-27 05:41:05.164 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.194/32] ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.188918 containerd[1684]: 2026-01-27 05:41:05.164 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e6df4cbaca ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.188918 containerd[1684]: 2026-01-27 05:41:05.172 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.189004 containerd[1684]: 2026-01-27 05:41:05.173 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0", GenerateName:"calico-apiserver-64987cbdf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"627719fb-0c0e-4f7d-a570-d19f7c72ca81", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64987cbdf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde", Pod:"calico-apiserver-64987cbdf8-c8xsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e6df4cbaca", MAC:"ce:1c:02:4c:9b:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.189094 containerd[1684]: 2026-01-27 05:41:05.186 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-c8xsr" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--c8xsr-eth0" Jan 27 05:41:05.201000 audit[4376]: NETFILTER_CFG table=filter:123 family=2 entries=50 
op=nft_register_chain pid=4376 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:05.201000 audit[4376]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff93deb840 a2=0 a3=7fff93deb82c items=0 ppid=4057 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.201000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:05.222032 containerd[1684]: time="2026-01-27T05:41:05.221583258Z" level=info msg="connecting to shim 4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde" address="unix:///run/containerd/s/6595ffc481f6902a375b0ee5ddedeb445b9cb23f3f93a16425f3b37aa8042248" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:05.253277 systemd[1]: Started cri-containerd-4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde.scope - libcontainer container 4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde. 
Jan 27 05:41:05.271229 systemd-networkd[1565]: cali42069ca8335: Link UP Jan 27 05:41:05.272588 systemd-networkd[1565]: cali42069ca8335: Gained carrier Jan 27 05:41:05.274000 audit: BPF prog-id=211 op=LOAD Jan 27 05:41:05.276000 audit: BPF prog-id=212 op=LOAD Jan 27 05:41:05.276000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.278000 audit: BPF prog-id=212 op=UNLOAD Jan 27 05:41:05.278000 audit[4396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.279000 audit: BPF prog-id=213 op=LOAD Jan 27 05:41:05.279000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.279000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.279000 audit: BPF prog-id=214 op=LOAD Jan 27 05:41:05.279000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.279000 audit: BPF prog-id=214 op=UNLOAD Jan 27 05:41:05.279000 audit[4396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.279000 audit: BPF prog-id=213 op=UNLOAD Jan 27 05:41:05.279000 audit[4396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:05.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.279000 audit: BPF prog-id=215 op=LOAD Jan 27 05:41:05.279000 audit[4396]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4385 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363531363830373833386133366537636634663238393530646137 Jan 27 05:41:05.293170 containerd[1684]: 2026-01-27 05:41:05.097 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0 calico-kube-controllers-65f4f5cc45- calico-system a4123b97-2d89-42ef-9011-27c5f71176fd 851 0 2026-01-27 05:40:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65f4f5cc45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df calico-kube-controllers-65f4f5cc45-xxl57 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali42069ca8335 [] [] }} ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" 
WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-" Jan 27 05:41:05.293170 containerd[1684]: 2026-01-27 05:41:05.097 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293170 containerd[1684]: 2026-01-27 05:41:05.145 [INFO][4355] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" HandleID="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.145 [INFO][4355] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" HandleID="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"calico-kube-controllers-65f4f5cc45-xxl57", "timestamp":"2026-01-27 05:41:05.145253045 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.145 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.161 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.161 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.232 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.238 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.245 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.247 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293377 containerd[1684]: 2026-01-27 05:41:05.250 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.251 [INFO][4355] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.252 [INFO][4355] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75 Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.257 [INFO][4355] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" 
host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.263 [INFO][4355] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.195/26] block=192.168.36.192/26 handle="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.264 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.195/26] handle="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.264 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:05.293567 containerd[1684]: 2026-01-27 05:41:05.264 [INFO][4355] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.195/26] IPv6=[] ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" HandleID="k8s-pod-network.d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293701 containerd[1684]: 2026-01-27 05:41:05.267 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0", GenerateName:"calico-kube-controllers-65f4f5cc45-", Namespace:"calico-system", SelfLink:"", UID:"a4123b97-2d89-42ef-9011-27c5f71176fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 
43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f4f5cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"calico-kube-controllers-65f4f5cc45-xxl57", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42069ca8335", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.293752 containerd[1684]: 2026-01-27 05:41:05.267 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.195/32] ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293752 containerd[1684]: 2026-01-27 05:41:05.267 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42069ca8335 ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293752 containerd[1684]: 2026-01-27 05:41:05.272 [INFO][4321] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.293817 containerd[1684]: 2026-01-27 05:41:05.275 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0", GenerateName:"calico-kube-controllers-65f4f5cc45-", Namespace:"calico-system", SelfLink:"", UID:"a4123b97-2d89-42ef-9011-27c5f71176fd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f4f5cc45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75", Pod:"calico-kube-controllers-65f4f5cc45-xxl57", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42069ca8335", MAC:"06:9a:91:23:30:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.293866 containerd[1684]: 2026-01-27 05:41:05.291 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" Namespace="calico-system" Pod="calico-kube-controllers-65f4f5cc45-xxl57" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--kube--controllers--65f4f5cc45--xxl57-eth0" Jan 27 05:41:05.312588 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 27 05:41:05.312761 kernel: audit: type=1325 audit(1769492465.307:663): table=filter:124 family=2 entries=40 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:05.307000 audit[4423]: NETFILTER_CFG table=filter:124 family=2 entries=40 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:05.317178 kernel: audit: type=1300 audit(1769492465.307:663): arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcf1356aa0 a2=0 a3=7ffcf1356a8c items=0 ppid=4057 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.307000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcf1356aa0 a2=0 a3=7ffcf1356a8c items=0 ppid=4057 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:41:05.318400 kernel: audit: type=1327 audit(1769492465.307:663): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:05.307000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:05.325678 containerd[1684]: time="2026-01-27T05:41:05.325648102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-c8xsr,Uid:627719fb-0c0e-4f7d-a570-d19f7c72ca81,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c6516807838a36e7cf4f28950da7391e4e94f97ea941ee3eece36a48def6dde\"" Jan 27 05:41:05.327201 containerd[1684]: time="2026-01-27T05:41:05.327181855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:05.338337 containerd[1684]: time="2026-01-27T05:41:05.338237762Z" level=info msg="connecting to shim d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75" address="unix:///run/containerd/s/38f0611b55d21fa86e5d1278328905ac3ae6c6dd86704513e229547be5e3906f" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:05.367344 systemd[1]: Started cri-containerd-d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75.scope - libcontainer container d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75. 
Jan 27 05:41:05.371813 systemd-networkd[1565]: calie793e30fb89: Link UP Jan 27 05:41:05.373187 systemd-networkd[1565]: calie793e30fb89: Gained carrier Jan 27 05:41:05.386000 audit: BPF prog-id=216 op=LOAD Jan 27 05:41:05.390034 kernel: audit: type=1334 audit(1769492465.386:664): prog-id=216 op=LOAD Jan 27 05:41:05.390000 audit: BPF prog-id=217 op=LOAD Jan 27 05:41:05.393025 kernel: audit: type=1334 audit(1769492465.390:665): prog-id=217 op=LOAD Jan 27 05:41:05.390000 audit[4451]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.393735 containerd[1684]: 2026-01-27 05:41:05.107 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0 calico-apiserver-64987cbdf8- calico-apiserver 37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8 853 0 2026-01-27 05:40:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64987cbdf8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df calico-apiserver-64987cbdf8-thlkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie793e30fb89 [] [] }} ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-" Jan 27 05:41:05.393735 containerd[1684]: 2026-01-27 05:41:05.108 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.393735 containerd[1684]: 2026-01-27 05:41:05.153 [INFO][4361] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" HandleID="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.153 [INFO][4361] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" HandleID="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"calico-apiserver-64987cbdf8-thlkg", "timestamp":"2026-01-27 05:41:05.153320039 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.153 [INFO][4361] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.264 [INFO][4361] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.264 [INFO][4361] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.333 [INFO][4361] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.339 [INFO][4361] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.344 [INFO][4361] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.347 [INFO][4361] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.393866 containerd[1684]: 2026-01-27 05:41:05.349 [INFO][4361] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.350 [INFO][4361] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.351 [INFO][4361] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.356 [INFO][4361] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.366 [INFO][4361] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.36.196/26] block=192.168.36.192/26 handle="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.366 [INFO][4361] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.196/26] handle="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.366 [INFO][4361] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:05.394067 containerd[1684]: 2026-01-27 05:41:05.366 [INFO][4361] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.196/26] IPv6=[] ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" HandleID="k8s-pod-network.94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.394194 containerd[1684]: 2026-01-27 05:41:05.369 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0", GenerateName:"calico-apiserver-64987cbdf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64987cbdf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"calico-apiserver-64987cbdf8-thlkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie793e30fb89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.394246 containerd[1684]: 2026-01-27 05:41:05.369 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.196/32] ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.394246 containerd[1684]: 2026-01-27 05:41:05.369 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie793e30fb89 ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.394246 containerd[1684]: 2026-01-27 05:41:05.373 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" 
Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.394307 containerd[1684]: 2026-01-27 05:41:05.374 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0", GenerateName:"calico-apiserver-64987cbdf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64987cbdf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c", Pod:"calico-apiserver-64987cbdf8-thlkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calie793e30fb89", MAC:"5e:54:21:ff:72:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:05.394357 containerd[1684]: 2026-01-27 05:41:05.387 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" Namespace="calico-apiserver" Pod="calico-apiserver-64987cbdf8-thlkg" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--64987cbdf8--thlkg-eth0" Jan 27 05:41:05.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.399234 kernel: audit: type=1300 audit(1769492465.390:665): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.399284 kernel: audit: type=1327 audit(1769492465.390:665): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.390000 audit: BPF prog-id=217 op=UNLOAD Jan 27 05:41:05.390000 audit[4451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.409502 kernel: audit: type=1334 audit(1769492465.390:666): 
prog-id=217 op=UNLOAD Jan 27 05:41:05.409550 kernel: audit: type=1300 audit(1769492465.390:666): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.420028 kernel: audit: type=1327 audit(1769492465.390:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.390000 audit: BPF prog-id=218 op=LOAD Jan 27 05:41:05.390000 audit[4451]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.390000 audit: BPF prog-id=219 op=LOAD Jan 27 05:41:05.390000 audit[4451]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.390000 audit: BPF prog-id=219 op=UNLOAD Jan 27 05:41:05.390000 audit[4451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.392000 audit: BPF prog-id=218 op=UNLOAD Jan 27 05:41:05.392000 audit[4451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.392000 audit: BPF prog-id=220 op=LOAD Jan 27 05:41:05.392000 audit[4451]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4440 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434653236386230313436373834363364633863356238303338613938 Jan 27 05:41:05.423000 audit[4478]: NETFILTER_CFG table=filter:125 family=2 entries=45 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:05.423000 audit[4478]: SYSCALL arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7ffe3969bd10 a2=0 a3=7ffe3969bcfc items=0 ppid=4057 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.423000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:05.437997 containerd[1684]: time="2026-01-27T05:41:05.437760716Z" level=info msg="connecting to shim 94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c" address="unix:///run/containerd/s/0f6355e79db42c186e0b1f2bc9041498cc53abd0ad78f40e162f99747d956360" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:05.448581 containerd[1684]: time="2026-01-27T05:41:05.448555257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f4f5cc45-xxl57,Uid:a4123b97-2d89-42ef-9011-27c5f71176fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4e268b014678463dc8c5b8038a987f49e5cc66ded707ab4809a58349d162d75\"" Jan 27 05:41:05.472231 systemd[1]: Started cri-containerd-94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c.scope - libcontainer container 
94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c. Jan 27 05:41:05.481000 audit: BPF prog-id=221 op=LOAD Jan 27 05:41:05.481000 audit: BPF prog-id=222 op=LOAD Jan 27 05:41:05.481000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.481000 audit: BPF prog-id=222 op=UNLOAD Jan 27 05:41:05.481000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.481000 audit: BPF prog-id=223 op=LOAD Jan 27 05:41:05.481000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.482000 audit: BPF prog-id=224 op=LOAD Jan 27 05:41:05.482000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.482000 audit: BPF prog-id=224 op=UNLOAD Jan 27 05:41:05.482000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.482000 audit: BPF prog-id=223 op=UNLOAD Jan 27 05:41:05.482000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:05.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.482000 audit: BPF prog-id=225 op=LOAD Jan 27 05:41:05.482000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4488 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:05.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934663332393364663933663530623736653461393465333335363639 Jan 27 05:41:05.528712 containerd[1684]: time="2026-01-27T05:41:05.528341740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64987cbdf8-thlkg,Uid:37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"94f3293df93f50b76e4a94e335669dbf68c2b2d167342fb425b390fcb968467c\"" Jan 27 05:41:05.662586 containerd[1684]: time="2026-01-27T05:41:05.662439911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:05.664901 containerd[1684]: time="2026-01-27T05:41:05.664875898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:05.665043 containerd[1684]: time="2026-01-27T05:41:05.664965891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=0" Jan 27 05:41:05.665294 kubelet[2893]: E0127 05:41:05.665241 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:05.665294 kubelet[2893]: E0127 05:41:05.665278 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:05.666000 kubelet[2893]: E0127 05:41:05.665758 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:05.666000 kubelet[2893]: E0127 05:41:05.665802 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:05.666404 containerd[1684]: time="2026-01-27T05:41:05.666354275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:41:05.976723 containerd[1684]: time="2026-01-27T05:41:05.976628271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-zwfnm,Uid:39bafd82-e379-46da-b710-b960cdbdd540,Namespace:kube-system,Attempt:0,}" Jan 27 05:41:06.002397 containerd[1684]: time="2026-01-27T05:41:06.002255139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:06.004358 containerd[1684]: time="2026-01-27T05:41:06.004249631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:41:06.004358 containerd[1684]: time="2026-01-27T05:41:06.004332432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:06.005413 kubelet[2893]: E0127 05:41:06.004598 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:06.005413 kubelet[2893]: E0127 05:41:06.005171 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:06.005413 kubelet[2893]: E0127 05:41:06.005330 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:06.005413 kubelet[2893]: E0127 05:41:06.005367 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:06.005782 containerd[1684]: time="2026-01-27T05:41:06.005765595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:06.085243 systemd-networkd[1565]: cali49ffe3aea0d: Link UP Jan 27 05:41:06.085407 systemd-networkd[1565]: cali49ffe3aea0d: Gained carrier Jan 27 05:41:06.100503 containerd[1684]: 2026-01-27 05:41:06.028 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0 coredns-66bc5c9577- kube-system 39bafd82-e379-46da-b710-b960cdbdd540 852 0 2026-01-27 05:40:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df coredns-66bc5c9577-zwfnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49ffe3aea0d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-" Jan 27 
05:41:06.100503 containerd[1684]: 2026-01-27 05:41:06.028 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.100503 containerd[1684]: 2026-01-27 05:41:06.050 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" HandleID="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.050 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" HandleID="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"coredns-66bc5c9577-zwfnm", "timestamp":"2026-01-27 05:41:06.050244529 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.050 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.050 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.050 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.057 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.061 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.064 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.066 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.100692 containerd[1684]: 2026-01-27 05:41:06.068 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.068 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.069 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.075 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.080 [INFO][4544] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.36.197/26] block=192.168.36.192/26 handle="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.080 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.197/26] handle="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.080 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:06.101303 containerd[1684]: 2026-01-27 05:41:06.080 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.197/26] IPv6=[] ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" HandleID="k8s-pod-network.1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.101437 containerd[1684]: 2026-01-27 05:41:06.082 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"39bafd82-e379-46da-b710-b960cdbdd540", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"coredns-66bc5c9577-zwfnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ffe3aea0d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:06.101437 containerd[1684]: 2026-01-27 05:41:06.082 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.197/32] ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.101437 containerd[1684]: 2026-01-27 05:41:06.082 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49ffe3aea0d 
ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.101437 containerd[1684]: 2026-01-27 05:41:06.085 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.101437 containerd[1684]: 2026-01-27 05:41:06.086 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"39bafd82-e379-46da-b710-b960cdbdd540", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", 
ContainerID:"1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b", Pod:"coredns-66bc5c9577-zwfnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49ffe3aea0d", MAC:"ce:16:79:13:da:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:06.102416 containerd[1684]: 2026-01-27 05:41:06.096 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" Namespace="kube-system" Pod="coredns-66bc5c9577-zwfnm" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--zwfnm-eth0" Jan 27 05:41:06.112000 audit[4559]: NETFILTER_CFG table=filter:126 family=2 entries=60 op=nft_register_chain pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:06.112000 audit[4559]: SYSCALL arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7ffc456528f0 a2=0 a3=7ffc456528dc items=0 ppid=4057 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.112000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:06.129620 kubelet[2893]: E0127 05:41:06.128745 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:06.134311 kubelet[2893]: E0127 05:41:06.134240 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:06.139026 containerd[1684]: time="2026-01-27T05:41:06.138983151Z" level=info msg="connecting to shim 1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b" address="unix:///run/containerd/s/67d9161e72ad3ea74e37cd27a5abb67d8d2a38b86b2d22c9ff2747205bb73962" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:06.170205 systemd[1]: Started cri-containerd-1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b.scope - libcontainer container 
1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b. Jan 27 05:41:06.175000 audit[4594]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:06.175000 audit[4594]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb3af84d0 a2=0 a3=7fffb3af84bc items=0 ppid=3018 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:06.179000 audit[4594]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:06.179000 audit[4594]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffb3af84d0 a2=0 a3=0 items=0 ppid=3018 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:06.181000 audit: BPF prog-id=226 op=LOAD Jan 27 05:41:06.181000 audit: BPF prog-id=227 op=LOAD Jan 27 05:41:06.181000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.181000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=227 op=UNLOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=228 op=LOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=229 op=LOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=229 op=UNLOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=228 op=UNLOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.182000 audit: BPF prog-id=230 op=LOAD Jan 27 05:41:06.182000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4569 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346165653634383365363061303432373530336432353161613462 Jan 27 05:41:06.218365 containerd[1684]: time="2026-01-27T05:41:06.218324946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zwfnm,Uid:39bafd82-e379-46da-b710-b960cdbdd540,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b\"" Jan 27 05:41:06.224374 containerd[1684]: time="2026-01-27T05:41:06.224344718Z" level=info msg="CreateContainer within sandbox \"1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 05:41:06.253082 containerd[1684]: time="2026-01-27T05:41:06.251956777Z" level=info msg="Container 7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:41:06.257212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378415280.mount: Deactivated successfully. 
Jan 27 05:41:06.267073 containerd[1684]: time="2026-01-27T05:41:06.267040819Z" level=info msg="CreateContainer within sandbox \"1f4aee6483e60a0427503d251aa4bcb7d4d18c944526dd7c8141f8d8854b008b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a\"" Jan 27 05:41:06.268294 containerd[1684]: time="2026-01-27T05:41:06.268260577Z" level=info msg="StartContainer for \"7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a\"" Jan 27 05:41:06.269698 containerd[1684]: time="2026-01-27T05:41:06.268994360Z" level=info msg="connecting to shim 7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a" address="unix:///run/containerd/s/67d9161e72ad3ea74e37cd27a5abb67d8d2a38b86b2d22c9ff2747205bb73962" protocol=ttrpc version=3 Jan 27 05:41:06.290275 systemd[1]: Started cri-containerd-7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a.scope - libcontainer container 7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a. 
Jan 27 05:41:06.301000 audit: BPF prog-id=231 op=LOAD Jan 27 05:41:06.301000 audit: BPF prog-id=232 op=LOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=232 op=UNLOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=233 op=LOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=234 op=LOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=234 op=UNLOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=233 op=UNLOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.301000 audit: BPF prog-id=235 op=LOAD Jan 27 05:41:06.301000 audit[4608]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4569 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:06.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333532393431633239363337643262346139333831346662333563 Jan 27 05:41:06.321807 containerd[1684]: time="2026-01-27T05:41:06.321772219Z" level=info msg="StartContainer for \"7b352941c29637d2b4a93814fb35cb110fbd32d65f3bc1c6bed85cd0d2ff3c6a\" returns successfully" Jan 27 05:41:06.328164 systemd-networkd[1565]: cali3e6df4cbaca: Gained IPv6LL Jan 27 05:41:06.354189 containerd[1684]: time="2026-01-27T05:41:06.354087260Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:06.356405 containerd[1684]: time="2026-01-27T05:41:06.356284877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:06.356568 containerd[1684]: time="2026-01-27T05:41:06.356351242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:06.356900 kubelet[2893]: 
E0127 05:41:06.356693 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:06.356900 kubelet[2893]: E0127 05:41:06.356735 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:06.356900 kubelet[2893]: E0127 05:41:06.356815 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:06.356900 kubelet[2893]: E0127 05:41:06.356841 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:06.776302 systemd-networkd[1565]: cali42069ca8335: Gained IPv6LL Jan 27 05:41:06.776822 systemd-networkd[1565]: calie793e30fb89: Gained IPv6LL Jan 27 05:41:07.139070 kubelet[2893]: E0127 05:41:07.138724 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:07.140024 kubelet[2893]: E0127 05:41:07.139655 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:07.140024 kubelet[2893]: E0127 05:41:07.139714 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:07.153027 kubelet[2893]: I0127 05:41:07.152719 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zwfnm" podStartSLOduration=37.152703806 podStartE2EDuration="37.152703806s" podCreationTimestamp="2026-01-27 05:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-27 05:41:07.150877702 +0000 UTC m=+44.280785359" watchObservedRunningTime="2026-01-27 05:41:07.152703806 +0000 UTC m=+44.282611472" Jan 27 05:41:07.206000 audit[4642]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:07.206000 audit[4642]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcebd48920 a2=0 a3=7ffcebd4890c items=0 ppid=3018 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:07.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:07.212000 audit[4642]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:07.212000 audit[4642]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcebd48920 a2=0 a3=0 items=0 ppid=3018 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:07.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:07.545225 systemd-networkd[1565]: cali49ffe3aea0d: Gained IPv6LL Jan 27 05:41:07.979522 containerd[1684]: time="2026-01-27T05:41:07.979263307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nvkh6,Uid:df8cdf47-c52e-4d93-8feb-b0f69767774a,Namespace:kube-system,Attempt:0,}" Jan 27 05:41:08.103132 systemd-networkd[1565]: calid2f752b5061: Link UP Jan 27 05:41:08.103569 systemd-networkd[1565]: calid2f752b5061: Gained carrier Jan 27 05:41:08.123332 
containerd[1684]: 2026-01-27 05:41:08.037 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0 coredns-66bc5c9577- kube-system df8cdf47-c52e-4d93-8feb-b0f69767774a 843 0 2026-01-27 05:40:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df coredns-66bc5c9577-nvkh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2f752b5061 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.037 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.065 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" HandleID="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.065 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" 
HandleID="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"coredns-66bc5c9577-nvkh6", "timestamp":"2026-01-27 05:41:08.065404523 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.065 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.065 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.065 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.073 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.077 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.081 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.083 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.085 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.085 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.086 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.091 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.098 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.198/26] block=192.168.36.192/26 handle="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.098 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.198/26] handle="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.098 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:41:08.123332 containerd[1684]: 2026-01-27 05:41:08.098 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.198/26] IPv6=[] ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" HandleID="k8s-pod-network.690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Workload="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.123890 containerd[1684]: 2026-01-27 05:41:08.100 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df8cdf47-c52e-4d93-8feb-b0f69767774a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"coredns-66bc5c9577-nvkh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid2f752b5061", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:08.123890 containerd[1684]: 2026-01-27 05:41:08.100 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.198/32] ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.123890 containerd[1684]: 2026-01-27 05:41:08.100 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2f752b5061 ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.123890 containerd[1684]: 2026-01-27 05:41:08.107 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 
05:41:08.123890 containerd[1684]: 2026-01-27 05:41:08.108 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df8cdf47-c52e-4d93-8feb-b0f69767774a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d", Pod:"coredns-66bc5c9577-nvkh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2f752b5061", MAC:"06:4b:36:b0:48:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:08.124787 containerd[1684]: 2026-01-27 05:41:08.120 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" Namespace="kube-system" Pod="coredns-66bc5c9577-nvkh6" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-coredns--66bc5c9577--nvkh6-eth0" Jan 27 05:41:08.136000 audit[4676]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:08.136000 audit[4676]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7fffb7daa280 a2=0 a3=7fffb7daa26c items=0 ppid=4057 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.136000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:08.152095 containerd[1684]: time="2026-01-27T05:41:08.152030389Z" level=info msg="connecting to shim 690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d" address="unix:///run/containerd/s/9e5a1ca10ea828a000c07dd968a37ab5e396856f94081a171c3000918cf0e187" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:08.183255 
systemd[1]: Started cri-containerd-690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d.scope - libcontainer container 690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d. Jan 27 05:41:08.197000 audit: BPF prog-id=236 op=LOAD Jan 27 05:41:08.198000 audit: BPF prog-id=237 op=LOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.198000 audit: BPF prog-id=237 op=UNLOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.198000 audit: BPF prog-id=238 op=LOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.198000 audit: BPF prog-id=239 op=LOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.198000 audit: BPF prog-id=239 op=UNLOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.198000 audit: BPF prog-id=238 op=UNLOAD Jan 27 05:41:08.198000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.199000 audit: BPF prog-id=240 op=LOAD Jan 27 05:41:08.199000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4684 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639306362643763643239653036616166646430363335636139633931 Jan 27 05:41:08.235000 audit[4725]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:08.235000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc1914850 a2=0 a3=7ffdc191483c items=0 ppid=3018 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:08.240332 containerd[1684]: time="2026-01-27T05:41:08.240273658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nvkh6,Uid:df8cdf47-c52e-4d93-8feb-b0f69767774a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d\"" Jan 27 05:41:08.242000 audit[4725]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:08.242000 audit[4725]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdc1914850 a2=0 a3=7ffdc191483c items=0 ppid=3018 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:08.246294 containerd[1684]: time="2026-01-27T05:41:08.246262277Z" level=info msg="CreateContainer within sandbox \"690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 05:41:08.263955 containerd[1684]: time="2026-01-27T05:41:08.263470313Z" level=info msg="Container 1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:41:08.267999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107787076.mount: Deactivated successfully. 
Jan 27 05:41:08.275002 containerd[1684]: time="2026-01-27T05:41:08.274960305Z" level=info msg="CreateContainer within sandbox \"690cbd7cd29e06aafdd0635ca9c91dd4469bc0cb469a5a89972e23ab2b87c37d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b\"" Jan 27 05:41:08.275879 containerd[1684]: time="2026-01-27T05:41:08.275814373Z" level=info msg="StartContainer for \"1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b\"" Jan 27 05:41:08.276782 containerd[1684]: time="2026-01-27T05:41:08.276764142Z" level=info msg="connecting to shim 1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b" address="unix:///run/containerd/s/9e5a1ca10ea828a000c07dd968a37ab5e396856f94081a171c3000918cf0e187" protocol=ttrpc version=3 Jan 27 05:41:08.305237 systemd[1]: Started cri-containerd-1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b.scope - libcontainer container 1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b. 
Jan 27 05:41:08.316000 audit: BPF prog-id=241 op=LOAD Jan 27 05:41:08.316000 audit: BPF prog-id=242 op=LOAD Jan 27 05:41:08.316000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.316000 audit: BPF prog-id=242 op=UNLOAD Jan 27 05:41:08.316000 audit[4726]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.317000 audit: BPF prog-id=243 op=LOAD Jan 27 05:41:08.317000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.317000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.317000 audit: BPF prog-id=244 op=LOAD Jan 27 05:41:08.317000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.317000 audit: BPF prog-id=244 op=UNLOAD Jan 27 05:41:08.317000 audit[4726]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.317000 audit: BPF prog-id=243 op=UNLOAD Jan 27 05:41:08.317000 audit[4726]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:41:08.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.317000 audit: BPF prog-id=245 op=LOAD Jan 27 05:41:08.317000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4684 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:08.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161313939393939323331323835333039376537333532363538666238 Jan 27 05:41:08.343315 containerd[1684]: time="2026-01-27T05:41:08.343274525Z" level=info msg="StartContainer for \"1a1999992312853097e7352658fb8ab0ae06155fa608fcd4e1393421ab801c4b\" returns successfully" Jan 27 05:41:09.152042 kubelet[2893]: I0127 05:41:09.151663 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nvkh6" podStartSLOduration=39.151643685 podStartE2EDuration="39.151643685s" podCreationTimestamp="2026-01-27 05:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 05:41:09.151338877 +0000 UTC m=+46.281246583" watchObservedRunningTime="2026-01-27 05:41:09.151643685 +0000 UTC m=+46.281551391" Jan 27 05:41:09.257000 audit[4760]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:09.257000 audit[4760]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffdeea94fc0 a2=0 a3=7ffdeea94fac items=0 ppid=3018 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:09.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:09.263000 audit[4760]: NETFILTER_CFG table=nat:135 family=2 entries=44 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:09.263000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdeea94fc0 a2=0 a3=7ffdeea94fac items=0 ppid=3018 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:09.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:09.656320 systemd-networkd[1565]: calid2f752b5061: Gained IPv6LL Jan 27 05:41:09.979724 containerd[1684]: time="2026-01-27T05:41:09.979427242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4f587895-dm4v4,Uid:5de8dbc5-cf50-4a41-99c0-153b9e80ac79,Namespace:calico-apiserver,Attempt:0,}" Jan 27 05:41:09.986619 containerd[1684]: time="2026-01-27T05:41:09.986520666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt29m,Uid:2fd25125-023e-4bbf-9ed8-e267fcf6bfb3,Namespace:calico-system,Attempt:0,}" Jan 27 05:41:09.986739 containerd[1684]: time="2026-01-27T05:41:09.986633588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4t22j,Uid:e8248550-dadc-499c-aab6-b47350ead3d7,Namespace:calico-system,Attempt:0,}" Jan 27 05:41:10.144590 systemd-networkd[1565]: calie36c2e96644: Link UP Jan 27 
05:41:10.146300 systemd-networkd[1565]: calie36c2e96644: Gained carrier Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.048 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0 calico-apiserver-5b4f587895- calico-apiserver 5de8dbc5-cf50-4a41-99c0-153b9e80ac79 847 0 2026-01-27 05:40:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b4f587895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df calico-apiserver-5b4f587895-dm4v4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie36c2e96644 [] [] }} ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.048 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.095 [INFO][4798] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" HandleID="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.096 [INFO][4798] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" HandleID="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"calico-apiserver-5b4f587895-dm4v4", "timestamp":"2026-01-27 05:41:10.095721708 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.096 [INFO][4798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.096 [INFO][4798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.096 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.106 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.113 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.118 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.120 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.122 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.122 [INFO][4798] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.123 [INFO][4798] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463 Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.127 [INFO][4798] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4798] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.36.199/26] block=192.168.36.192/26 handle="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.199/26] handle="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:10.164893 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4798] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.199/26] IPv6=[] ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" HandleID="k8s-pod-network.d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Workload="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.136 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0", GenerateName:"calico-apiserver-5b4f587895-", Namespace:"calico-apiserver", SelfLink:"", UID:"5de8dbc5-cf50-4a41-99c0-153b9e80ac79", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4f587895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"calico-apiserver-5b4f587895-dm4v4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie36c2e96644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.136 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.199/32] ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.136 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie36c2e96644 ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.147 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.148 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0", GenerateName:"calico-apiserver-5b4f587895-", Namespace:"calico-apiserver", SelfLink:"", UID:"5de8dbc5-cf50-4a41-99c0-153b9e80ac79", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b4f587895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463", Pod:"calico-apiserver-5b4f587895-dm4v4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calie36c2e96644", MAC:"ee:81:1d:ac:40:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.166700 containerd[1684]: 2026-01-27 05:41:10.159 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" Namespace="calico-apiserver" Pod="calico-apiserver-5b4f587895-dm4v4" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-calico--apiserver--5b4f587895--dm4v4-eth0" Jan 27 05:41:10.201338 containerd[1684]: time="2026-01-27T05:41:10.201295537Z" level=info msg="connecting to shim d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463" address="unix:///run/containerd/s/897f71dee070020e791f7587fd02131fe7285fc138f20f45c52b1a091b2d0201" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:10.200000 audit[4841]: NETFILTER_CFG table=filter:136 family=2 entries=59 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:10.200000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=29476 a0=3 a1=7fff44c8f320 a2=0 a3=7fff44c8f30c items=0 ppid=4057 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.200000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:10.237327 systemd[1]: Started cri-containerd-d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463.scope - libcontainer container d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463. 
Jan 27 05:41:10.261000 audit: BPF prog-id=246 op=LOAD Jan 27 05:41:10.261000 audit: BPF prog-id=247 op=LOAD Jan 27 05:41:10.261000 audit[4852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.262000 audit: BPF prog-id=247 op=UNLOAD Jan 27 05:41:10.263571 systemd-networkd[1565]: calic1dba3aec9e: Link UP Jan 27 05:41:10.262000 audit[4852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.263000 audit: BPF prog-id=248 op=LOAD Jan 27 05:41:10.263000 audit[4852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.263000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.265298 systemd-networkd[1565]: calic1dba3aec9e: Gained carrier Jan 27 05:41:10.265000 audit: BPF prog-id=249 op=LOAD Jan 27 05:41:10.265000 audit[4852]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.265000 audit: BPF prog-id=249 op=UNLOAD Jan 27 05:41:10.265000 audit[4852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.266000 audit: BPF prog-id=248 op=UNLOAD Jan 27 05:41:10.266000 audit[4852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.267000 audit: BPF prog-id=250 op=LOAD Jan 27 05:41:10.267000 audit[4852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4839 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432306566666332636136366162303261353439343365613137313539 Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.079 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0 goldmane-7c778bb748- calico-system e8248550-dadc-499c-aab6-b47350ead3d7 850 0 2026-01-27 05:40:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df goldmane-7c778bb748-4t22j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic1dba3aec9e [] [] }} ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" 
WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.080 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.116 [INFO][4806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" HandleID="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Workload="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.116 [INFO][4806] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" HandleID="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Workload="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"goldmane-7c778bb748-4t22j", "timestamp":"2026-01-27 05:41:10.116248558 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.116 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.133 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.207 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.219 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.225 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.228 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.231 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.231 [INFO][4806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.233 [INFO][4806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.242 [INFO][4806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.253 [INFO][4806] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.36.200/26] block=192.168.36.192/26 handle="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.254 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.200/26] handle="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.254 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 05:41:10.283179 containerd[1684]: 2026-01-27 05:41:10.254 [INFO][4806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.200/26] IPv6=[] ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" HandleID="k8s-pod-network.1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Workload="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.258 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e8248550-dadc-499c-aab6-b47350ead3d7", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"goldmane-7c778bb748-4t22j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1dba3aec9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.258 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.200/32] ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.258 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1dba3aec9e ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.269 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.272 [INFO][4780] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e8248550-dadc-499c-aab6-b47350ead3d7", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a", Pod:"goldmane-7c778bb748-4t22j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1dba3aec9e", MAC:"7a:c0:62:9b:ea:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.284431 containerd[1684]: 2026-01-27 05:41:10.280 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" Namespace="calico-system" Pod="goldmane-7c778bb748-4t22j" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-goldmane--7c778bb748--4t22j-eth0" Jan 27 05:41:10.292000 audit[4881]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:10.292000 audit[4881]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7ce22200 a2=0 a3=7fff7ce221ec items=0 ppid=3018 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:10.321727 containerd[1684]: time="2026-01-27T05:41:10.321612174Z" level=info msg="connecting to shim 1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a" address="unix:///run/containerd/s/4bcbf22a61a96a991063fe038718ebf62dd1dc2aad790b5651d3d985f9ce2ecb" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:10.337316 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 27 05:41:10.337419 kernel: audit: type=1325 audit(1769492470.320:733): table=nat:138 family=2 entries=56 op=nft_register_chain pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:10.320000 audit[4881]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:10.344662 kernel: audit: type=1300 audit(1769492470.320:733): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff7ce22200 a2=0 a3=7fff7ce221ec items=0 ppid=3018 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.320000 audit[4881]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff7ce22200 a2=0 a3=7fff7ce221ec items=0 ppid=3018 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:10.350110 kernel: audit: type=1327 audit(1769492470.320:733): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:10.378223 systemd[1]: Started cri-containerd-1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a.scope - libcontainer container 1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a. Jan 27 05:41:10.381086 systemd-networkd[1565]: cali7640955a829: Link UP Jan 27 05:41:10.382462 systemd-networkd[1565]: cali7640955a829: Gained carrier Jan 27 05:41:10.397229 containerd[1684]: time="2026-01-27T05:41:10.397125111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b4f587895-dm4v4,Uid:5de8dbc5-cf50-4a41-99c0-153b9e80ac79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d20effc2ca66ab02a54943ea171594d748b8df6a461407149e717fa6c450b463\"" Jan 27 05:41:10.400265 containerd[1684]: time="2026-01-27T05:41:10.400147466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.079 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0 csi-node-driver- calico-system 2fd25125-023e-4bbf-9ed8-e267fcf6bfb3 744 0 2026-01-27 05:40:43 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4592-0-0-n-5ca0d578df csi-node-driver-gt29m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7640955a829 [] [] }} ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.080 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.118 [INFO][4807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" HandleID="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Workload="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.119 [INFO][4807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" HandleID="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Workload="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4592-0-0-n-5ca0d578df", "pod":"csi-node-driver-gt29m", "timestamp":"2026-01-27 
05:41:10.118903283 +0000 UTC"}, Hostname:"ci-4592-0-0-n-5ca0d578df", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.119 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.254 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.254 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4592-0-0-n-5ca0d578df' Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.309 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.318 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.325 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.328 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.332 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.332 [INFO][4807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 
containerd[1684]: 2026-01-27 05:41:10.337 [INFO][4807] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650 Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.347 [INFO][4807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.362 [INFO][4807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.201/26] block=192.168.36.192/26 handle="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.362 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.201/26] handle="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" host="ci-4592-0-0-n-5ca0d578df" Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.362 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 05:41:10.411184 containerd[1684]: 2026-01-27 05:41:10.362 [INFO][4807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.201/26] IPv6=[] ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" HandleID="k8s-pod-network.f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Workload="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.368 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"", Pod:"csi-node-driver-gt29m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.201/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7640955a829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.369 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.201/32] ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.369 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7640955a829 ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.387 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.388 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd25125-023e-4bbf-9ed8-e267fcf6bfb3", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 5, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4592-0-0-n-5ca0d578df", ContainerID:"f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650", Pod:"csi-node-driver-gt29m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7640955a829", MAC:"5e:19:ef:31:0e:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 05:41:10.412362 containerd[1684]: 2026-01-27 05:41:10.408 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" Namespace="calico-system" Pod="csi-node-driver-gt29m" WorkloadEndpoint="ci--4592--0--0--n--5ca0d578df-k8s-csi--node--driver--gt29m-eth0" Jan 27 05:41:10.421355 kernel: audit: type=1325 audit(1769492470.417:734): table=filter:139 family=2 entries=66 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:10.417000 audit[4934]: NETFILTER_CFG 
table=filter:139 family=2 entries=66 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:10.417000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=32752 a0=3 a1=7fff35067450 a2=0 a3=7fff3506743c items=0 ppid=4057 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.427028 kernel: audit: type=1300 audit(1769492470.417:734): arch=c000003e syscall=46 success=yes exit=32752 a0=3 a1=7fff35067450 a2=0 a3=7fff3506743c items=0 ppid=4057 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.417000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:10.430079 kernel: audit: type=1327 audit(1769492470.417:734): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:10.429000 audit: BPF prog-id=251 op=LOAD Jan 27 05:41:10.438041 kernel: audit: type=1334 audit(1769492470.429:735): prog-id=251 op=LOAD Jan 27 05:41:10.432000 audit: BPF prog-id=252 op=LOAD Jan 27 05:41:10.440024 kernel: audit: type=1334 audit(1769492470.432:736): prog-id=252 op=LOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.444023 kernel: audit: type=1300 
audit(1769492470.432:736): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.446945 containerd[1684]: time="2026-01-27T05:41:10.446918653Z" level=info msg="connecting to shim f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650" address="unix:///run/containerd/s/2e56a1aefeb1678b7c13b25707e487b4a8ce32fa29ed1ad15c28cae875ad5adc" namespace=k8s.io protocol=ttrpc version=3 Jan 27 05:41:10.451029 kernel: audit: type=1327 audit(1769492470.432:736): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.432000 audit: BPF prog-id=252 op=UNLOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.432000 audit: BPF prog-id=253 
op=LOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.432000 audit: BPF prog-id=254 op=LOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.432000 audit: BPF prog-id=254 op=UNLOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 
27 05:41:10.432000 audit: BPF prog-id=253 op=UNLOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.432000 audit: BPF prog-id=255 op=LOAD Jan 27 05:41:10.432000 audit[4904]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4893 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164316232333066363565323135656235393161306233643735633162 Jan 27 05:41:10.459000 audit[4961]: NETFILTER_CFG table=filter:140 family=2 entries=52 op=nft_register_chain pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 05:41:10.459000 audit[4961]: SYSCALL arch=c000003e syscall=46 success=yes exit=24280 a0=3 a1=7fff5538dec0 a2=0 a3=7fff5538deac items=0 ppid=4057 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.459000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 05:41:10.481331 systemd[1]: Started cri-containerd-f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650.scope - libcontainer container f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650. Jan 27 05:41:10.488111 containerd[1684]: time="2026-01-27T05:41:10.487717628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4t22j,Uid:e8248550-dadc-499c-aab6-b47350ead3d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d1b230f65e215eb591a0b3d75c1b7ad957b275aec46b397e91d45414ac4e36a\"" Jan 27 05:41:10.497000 audit: BPF prog-id=256 op=LOAD Jan 27 05:41:10.497000 audit: BPF prog-id=257 op=LOAD Jan 27 05:41:10.497000 audit[4960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.497000 audit: BPF prog-id=257 op=UNLOAD Jan 27 05:41:10.497000 audit[4960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.497000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.498000 audit: BPF prog-id=258 op=LOAD Jan 27 05:41:10.498000 audit[4960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.498000 audit: BPF prog-id=259 op=LOAD Jan 27 05:41:10.498000 audit[4960]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.498000 audit: BPF prog-id=259 op=UNLOAD Jan 27 05:41:10.498000 audit[4960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:41:10.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.498000 audit: BPF prog-id=258 op=UNLOAD Jan 27 05:41:10.498000 audit[4960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.498000 audit: BPF prog-id=260 op=LOAD Jan 27 05:41:10.498000 audit[4960]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4949 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:10.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613665343432623032393965633831356665336639623363666564 Jan 27 05:41:10.514063 containerd[1684]: time="2026-01-27T05:41:10.513948603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gt29m,Uid:2fd25125-023e-4bbf-9ed8-e267fcf6bfb3,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"f2a6e442b0299ec815fe3f9b3cfed4712de3f4db6ff41c795519e7defb249650\"" Jan 27 05:41:10.738104 containerd[1684]: time="2026-01-27T05:41:10.738050865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:10.740285 containerd[1684]: time="2026-01-27T05:41:10.739758947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:10.740285 containerd[1684]: time="2026-01-27T05:41:10.739831864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:10.743288 kubelet[2893]: E0127 05:41:10.743243 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:10.743288 kubelet[2893]: E0127 05:41:10.743287 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:10.743705 kubelet[2893]: E0127 05:41:10.743373 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:10.743705 kubelet[2893]: 
E0127 05:41:10.743410 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:10.744230 containerd[1684]: time="2026-01-27T05:41:10.743918606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:41:11.088979 containerd[1684]: time="2026-01-27T05:41:11.088906668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:11.094401 containerd[1684]: time="2026-01-27T05:41:11.094310394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:41:11.094490 containerd[1684]: time="2026-01-27T05:41:11.094389355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:11.094656 kubelet[2893]: E0127 05:41:11.094622 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:11.094737 kubelet[2893]: E0127 05:41:11.094726 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:11.095019 kubelet[2893]: E0127 05:41:11.094940 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:11.095019 kubelet[2893]: E0127 05:41:11.094980 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:11.095331 containerd[1684]: time="2026-01-27T05:41:11.095187706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:41:11.146266 kubelet[2893]: E0127 05:41:11.146133 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:11.147581 kubelet[2893]: E0127 05:41:11.147554 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:11.401000 audit[4995]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:11.401000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffa8748c0 a2=0 a3=7ffffa8748ac items=0 ppid=3018 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:11.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:11.406000 audit[4995]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:41:11.406000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffa8748c0 a2=0 a3=7ffffa8748ac items=0 ppid=3018 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:41:11.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:41:11.425194 containerd[1684]: time="2026-01-27T05:41:11.425153672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:11.426979 containerd[1684]: time="2026-01-27T05:41:11.426938001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:41:11.427211 containerd[1684]: time="2026-01-27T05:41:11.427031206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:11.427251 kubelet[2893]: E0127 05:41:11.427216 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:11.427287 kubelet[2893]: E0127 05:41:11.427259 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:11.427341 kubelet[2893]: E0127 05:41:11.427327 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:11.431249 containerd[1684]: time="2026-01-27T05:41:11.430671092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:41:11.771851 containerd[1684]: time="2026-01-27T05:41:11.771784858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:11.774463 containerd[1684]: time="2026-01-27T05:41:11.774408994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:41:11.774676 containerd[1684]: time="2026-01-27T05:41:11.774503422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:11.775191 kubelet[2893]: E0127 05:41:11.775119 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:11.776348 kubelet[2893]: E0127 05:41:11.775906 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:11.776348 kubelet[2893]: E0127 05:41:11.776156 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:11.776348 kubelet[2893]: E0127 05:41:11.776250 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:41:11.896334 systemd-networkd[1565]: calie36c2e96644: Gained IPv6LL Jan 27 05:41:11.897740 systemd-networkd[1565]: calic1dba3aec9e: Gained IPv6LL Jan 27 05:41:12.152631 kubelet[2893]: E0127 05:41:12.152411 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:12.152631 kubelet[2893]: E0127 05:41:12.152423 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:12.153864 kubelet[2893]: E0127 05:41:12.153834 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:41:12.216980 systemd-networkd[1565]: cali7640955a829: Gained IPv6LL Jan 27 05:41:14.976820 containerd[1684]: time="2026-01-27T05:41:14.976616172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:41:15.318410 containerd[1684]: time="2026-01-27T05:41:15.318365594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:15.320065 containerd[1684]: time="2026-01-27T05:41:15.320036130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:41:15.320151 containerd[1684]: time="2026-01-27T05:41:15.320101956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:15.320310 kubelet[2893]: E0127 05:41:15.320274 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:15.321275 kubelet[2893]: E0127 05:41:15.320315 2893 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:15.321275 kubelet[2893]: E0127 05:41:15.320389 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:15.321466 containerd[1684]: time="2026-01-27T05:41:15.321449951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:41:15.644754 containerd[1684]: time="2026-01-27T05:41:15.644654355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:15.646876 containerd[1684]: time="2026-01-27T05:41:15.646825132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:41:15.647037 containerd[1684]: time="2026-01-27T05:41:15.646902638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:15.647263 kubelet[2893]: E0127 05:41:15.647227 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 
27 05:41:15.647308 kubelet[2893]: E0127 05:41:15.647276 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:15.647361 kubelet[2893]: E0127 05:41:15.647344 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:15.647405 kubelet[2893]: E0127 05:41:15.647383 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:17.976681 containerd[1684]: time="2026-01-27T05:41:17.976595346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:41:18.316141 containerd[1684]: time="2026-01-27T05:41:18.316057508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:18.318870 containerd[1684]: time="2026-01-27T05:41:18.318241225Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:41:18.318870 containerd[1684]: time="2026-01-27T05:41:18.318391058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:18.319208 kubelet[2893]: E0127 05:41:18.319089 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:18.319208 kubelet[2893]: E0127 05:41:18.319181 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:18.319891 kubelet[2893]: E0127 05:41:18.319319 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:18.319891 kubelet[2893]: E0127 05:41:18.319383 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:21.974721 containerd[1684]: time="2026-01-27T05:41:21.974682027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:22.306231 containerd[1684]: time="2026-01-27T05:41:22.306060334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:22.308377 containerd[1684]: time="2026-01-27T05:41:22.308286706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:22.308497 containerd[1684]: time="2026-01-27T05:41:22.308461210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:22.308673 kubelet[2893]: E0127 05:41:22.308649 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:22.309512 kubelet[2893]: E0127 05:41:22.308957 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:22.309512 kubelet[2893]: E0127 05:41:22.309140 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start 
failed in pod calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:22.309512 kubelet[2893]: E0127 05:41:22.309170 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:22.309722 containerd[1684]: time="2026-01-27T05:41:22.309701295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:22.635915 containerd[1684]: time="2026-01-27T05:41:22.635806195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:22.639529 containerd[1684]: time="2026-01-27T05:41:22.639495595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:22.639653 containerd[1684]: time="2026-01-27T05:41:22.639567268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:22.639859 kubelet[2893]: E0127 05:41:22.639826 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:22.639912 kubelet[2893]: E0127 05:41:22.639866 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:22.639951 kubelet[2893]: E0127 05:41:22.639934 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:22.639984 kubelet[2893]: E0127 05:41:22.639963 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:22.977255 containerd[1684]: time="2026-01-27T05:41:22.976726862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:41:23.302987 containerd[1684]: time="2026-01-27T05:41:23.302927849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:23.304714 containerd[1684]: time="2026-01-27T05:41:23.304667265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:41:23.305060 containerd[1684]: time="2026-01-27T05:41:23.304746251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:23.305105 kubelet[2893]: E0127 05:41:23.304882 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:23.305105 kubelet[2893]: E0127 05:41:23.304920 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:23.305105 kubelet[2893]: E0127 05:41:23.304988 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:23.306484 containerd[1684]: time="2026-01-27T05:41:23.306445915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:41:23.646437 containerd[1684]: time="2026-01-27T05:41:23.645900796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:23.647803 containerd[1684]: time="2026-01-27T05:41:23.647752599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:41:23.647929 containerd[1684]: time="2026-01-27T05:41:23.647853386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:23.648067 kubelet[2893]: E0127 05:41:23.648030 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:23.648437 kubelet[2893]: E0127 05:41:23.648083 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:23.648437 kubelet[2893]: E0127 05:41:23.648172 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:23.649146 kubelet[2893]: E0127 05:41:23.649048 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:41:23.975705 containerd[1684]: time="2026-01-27T05:41:23.975413221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:41:24.303393 containerd[1684]: time="2026-01-27T05:41:24.302963157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:24.304915 containerd[1684]: time="2026-01-27T05:41:24.304818826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:41:24.304915 containerd[1684]: time="2026-01-27T05:41:24.304884356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:24.305085 kubelet[2893]: E0127 05:41:24.305039 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:24.305148 kubelet[2893]: E0127 05:41:24.305096 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:24.305237 kubelet[2893]: E0127 05:41:24.305207 2893 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container goldmane start failed in pod goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:24.305276 kubelet[2893]: E0127 05:41:24.305247 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:24.977119 containerd[1684]: time="2026-01-27T05:41:24.976964634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:25.326715 containerd[1684]: time="2026-01-27T05:41:25.326606419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:25.329100 containerd[1684]: time="2026-01-27T05:41:25.329018603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:25.329100 containerd[1684]: time="2026-01-27T05:41:25.329065086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:25.329267 kubelet[2893]: E0127 05:41:25.329237 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:25.329482 kubelet[2893]: E0127 05:41:25.329281 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:25.329482 kubelet[2893]: E0127 05:41:25.329359 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:25.329482 kubelet[2893]: E0127 05:41:25.329406 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:28.979074 kubelet[2893]: E0127 05:41:28.979003 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:28.980449 kubelet[2893]: E0127 05:41:28.980306 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:32.977599 kubelet[2893]: E0127 05:41:32.977131 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:32.977599 kubelet[2893]: E0127 05:41:32.977536 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:34.975733 kubelet[2893]: E0127 05:41:34.975364 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:37.978558 kubelet[2893]: E0127 05:41:37.978395 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:41:39.974233 kubelet[2893]: E0127 05:41:39.974190 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:41.975437 containerd[1684]: time="2026-01-27T05:41:41.974990475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:41:42.307564 containerd[1684]: time="2026-01-27T05:41:42.307389691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:42.309443 containerd[1684]: time="2026-01-27T05:41:42.309263514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:41:42.309443 containerd[1684]: time="2026-01-27T05:41:42.309358270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:42.309719 kubelet[2893]: E0127 05:41:42.309559 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:42.309719 kubelet[2893]: E0127 05:41:42.309604 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:41:42.309719 kubelet[2893]: E0127 05:41:42.309680 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:42.311084 containerd[1684]: time="2026-01-27T05:41:42.310792128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:41:42.640085 containerd[1684]: time="2026-01-27T05:41:42.639954126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:42.641684 containerd[1684]: time="2026-01-27T05:41:42.641652738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:41:42.641829 containerd[1684]: time="2026-01-27T05:41:42.641721069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:42.641901 kubelet[2893]: E0127 05:41:42.641848 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:42.641901 kubelet[2893]: E0127 05:41:42.641892 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:41:42.642153 kubelet[2893]: E0127 05:41:42.642126 2893 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:42.642199 kubelet[2893]: E0127 05:41:42.642171 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:43.975901 containerd[1684]: time="2026-01-27T05:41:43.975761648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:41:44.306130 containerd[1684]: time="2026-01-27T05:41:44.306092992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:44.307858 containerd[1684]: time="2026-01-27T05:41:44.307826170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:41:44.307937 containerd[1684]: time="2026-01-27T05:41:44.307893327Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:44.308059 kubelet[2893]: E0127 05:41:44.308030 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:44.308485 kubelet[2893]: E0127 05:41:44.308069 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:41:44.308485 kubelet[2893]: E0127 05:41:44.308134 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:44.308485 kubelet[2893]: E0127 05:41:44.308162 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:46.976243 containerd[1684]: time="2026-01-27T05:41:46.976200705Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:47.317954 containerd[1684]: time="2026-01-27T05:41:47.317884242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:47.319680 containerd[1684]: time="2026-01-27T05:41:47.319644129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:47.319847 containerd[1684]: time="2026-01-27T05:41:47.319718673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:47.320021 kubelet[2893]: E0127 05:41:47.319964 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:47.320486 kubelet[2893]: E0127 05:41:47.320005 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:47.320525 kubelet[2893]: E0127 05:41:47.320495 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:47.321178 kubelet[2893]: E0127 05:41:47.320528 
2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:47.321218 containerd[1684]: time="2026-01-27T05:41:47.320680051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:41:47.660943 containerd[1684]: time="2026-01-27T05:41:47.660590141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:47.662458 containerd[1684]: time="2026-01-27T05:41:47.662418670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:41:47.662812 containerd[1684]: time="2026-01-27T05:41:47.662499133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:47.663871 kubelet[2893]: E0127 05:41:47.662914 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:41:47.663871 kubelet[2893]: E0127 05:41:47.662953 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 
05:41:47.663871 kubelet[2893]: E0127 05:41:47.663038 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:47.663871 kubelet[2893]: E0127 05:41:47.663067 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:41:47.976959 containerd[1684]: time="2026-01-27T05:41:47.976417998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:48.304070 containerd[1684]: time="2026-01-27T05:41:48.303721383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:48.306093 containerd[1684]: time="2026-01-27T05:41:48.306063315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:48.306141 containerd[1684]: time="2026-01-27T05:41:48.306133502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:48.306546 kubelet[2893]: E0127 05:41:48.306507 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:48.306633 kubelet[2893]: E0127 05:41:48.306621 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:48.306779 kubelet[2893]: E0127 05:41:48.306757 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:48.306884 kubelet[2893]: E0127 05:41:48.306855 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:41:52.978024 containerd[1684]: time="2026-01-27T05:41:52.975916721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:41:53.307226 containerd[1684]: time="2026-01-27T05:41:53.307179826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:53.310195 containerd[1684]: time="2026-01-27T05:41:53.310151127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:41:53.310324 containerd[1684]: time="2026-01-27T05:41:53.310159724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:53.310418 kubelet[2893]: E0127 05:41:53.310376 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:53.310765 kubelet[2893]: E0127 05:41:53.310434 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:41:53.311196 kubelet[2893]: E0127 05:41:53.310829 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:53.311262 containerd[1684]: time="2026-01-27T05:41:53.311003076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:41:53.660653 containerd[1684]: time="2026-01-27T05:41:53.659867175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:53.664255 containerd[1684]: time="2026-01-27T05:41:53.664090117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:41:53.664829 containerd[1684]: time="2026-01-27T05:41:53.664140480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:53.665109 kubelet[2893]: E0127 05:41:53.665063 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:53.665302 kubelet[2893]: E0127 05:41:53.665131 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:41:53.665434 kubelet[2893]: E0127 05:41:53.665378 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:53.667138 kubelet[2893]: E0127 05:41:53.665432 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" 
podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:41:53.668745 containerd[1684]: time="2026-01-27T05:41:53.668630482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:41:54.022582 containerd[1684]: time="2026-01-27T05:41:54.022142425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:41:54.025114 containerd[1684]: time="2026-01-27T05:41:54.023856663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:41:54.025114 containerd[1684]: time="2026-01-27T05:41:54.023929513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:41:54.025200 kubelet[2893]: E0127 05:41:54.024111 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:54.025200 kubelet[2893]: E0127 05:41:54.025074 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:41:54.025566 kubelet[2893]: E0127 05:41:54.025487 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:41:54.025566 kubelet[2893]: E0127 05:41:54.025532 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:41:55.976070 kubelet[2893]: E0127 05:41:55.975595 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:41:55.977191 kubelet[2893]: E0127 05:41:55.977104 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:41:58.976749 kubelet[2893]: E0127 05:41:58.976712 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:41:58.978803 kubelet[2893]: E0127 05:41:58.977472 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:42:02.975843 kubelet[2893]: E0127 05:42:02.975574 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:42:04.978888 kubelet[2893]: E0127 05:42:04.978772 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:42:05.974897 kubelet[2893]: E0127 05:42:05.974678 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:42:09.978369 kubelet[2893]: E0127 05:42:09.978315 2893 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:42:10.979443 kubelet[2893]: E0127 05:42:10.979402 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:42:11.975981 kubelet[2893]: E0127 05:42:11.975056 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" 
podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:42:13.975214 kubelet[2893]: E0127 05:42:13.974898 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:42:17.975784 kubelet[2893]: E0127 05:42:17.975328 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:42:18.975473 kubelet[2893]: E0127 05:42:18.975408 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:42:19.975238 kubelet[2893]: E0127 05:42:19.975193 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:42:20.975042 kubelet[2893]: E0127 05:42:20.974961 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:42:22.981027 kubelet[2893]: E0127 05:42:22.980595 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:42:24.976202 containerd[1684]: 
time="2026-01-27T05:42:24.976063622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:42:25.322832 containerd[1684]: time="2026-01-27T05:42:25.322692706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:25.324486 containerd[1684]: time="2026-01-27T05:42:25.324422151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:42:25.324556 containerd[1684]: time="2026-01-27T05:42:25.324479404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:25.324589 kubelet[2893]: E0127 05:42:25.324560 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:42:25.324828 kubelet[2893]: E0127 05:42:25.324595 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:42:25.324828 kubelet[2893]: E0127 05:42:25.324655 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:25.325926 containerd[1684]: 
time="2026-01-27T05:42:25.325907978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:42:25.666883 containerd[1684]: time="2026-01-27T05:42:25.666779784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:25.668875 containerd[1684]: time="2026-01-27T05:42:25.668839473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:42:25.669038 containerd[1684]: time="2026-01-27T05:42:25.668926881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:25.669174 kubelet[2893]: E0127 05:42:25.669142 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:42:25.669227 kubelet[2893]: E0127 05:42:25.669185 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:42:25.669270 kubelet[2893]: E0127 05:42:25.669256 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:25.669379 kubelet[2893]: E0127 05:42:25.669294 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:42:25.975182 kubelet[2893]: E0127 05:42:25.975084 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:42:29.855481 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 27 05:42:29.855618 kernel: audit: type=1130 audit(1769492549.853:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.139:22-162.142.125.125:38842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:29.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.139:22-162.142.125.125:38842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:29.854176 systemd[1]: Started sshd@9-10.0.2.139:22-162.142.125.125:38842.service - OpenSSH per-connection server daemon (162.142.125.125:38842). Jan 27 05:42:29.974675 kubelet[2893]: E0127 05:42:29.974375 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:42:30.977281 containerd[1684]: time="2026-01-27T05:42:30.977176366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:42:31.321813 containerd[1684]: time="2026-01-27T05:42:31.321774943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:31.323595 containerd[1684]: time="2026-01-27T05:42:31.323518193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:42:31.323595 containerd[1684]: time="2026-01-27T05:42:31.323553406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:31.324106 kubelet[2893]: E0127 05:42:31.323900 2893 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:42:31.324106 kubelet[2893]: E0127 05:42:31.323944 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:42:31.324106 kubelet[2893]: E0127 05:42:31.324041 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:31.324106 kubelet[2893]: E0127 05:42:31.324074 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:42:32.977783 containerd[1684]: time="2026-01-27T05:42:32.977475989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:42:33.339799 containerd[1684]: time="2026-01-27T05:42:33.339498357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:33.341498 containerd[1684]: time="2026-01-27T05:42:33.341412494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:42:33.341746 containerd[1684]: time="2026-01-27T05:42:33.341684571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:33.342418 kubelet[2893]: E0127 05:42:33.342041 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:42:33.342418 kubelet[2893]: E0127 05:42:33.342178 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:42:33.342418 kubelet[2893]: E0127 05:42:33.342275 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:33.342418 kubelet[2893]: E0127 05:42:33.342308 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:42:33.976536 containerd[1684]: time="2026-01-27T05:42:33.976255573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:42:34.326025 containerd[1684]: time="2026-01-27T05:42:34.325853455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:34.327712 containerd[1684]: time="2026-01-27T05:42:34.327613321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:42:34.327712 containerd[1684]: time="2026-01-27T05:42:34.327686106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:34.327880 kubelet[2893]: E0127 05:42:34.327838 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:42:34.327917 kubelet[2893]: E0127 05:42:34.327891 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:42:34.327973 kubelet[2893]: E0127 05:42:34.327959 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:34.330624 containerd[1684]: time="2026-01-27T05:42:34.330469844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:42:34.660035 containerd[1684]: time="2026-01-27T05:42:34.659905274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:34.662951 containerd[1684]: time="2026-01-27T05:42:34.662896072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:42:34.663041 containerd[1684]: time="2026-01-27T05:42:34.663002863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:34.663404 kubelet[2893]: E0127 05:42:34.663349 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:42:34.663639 kubelet[2893]: E0127 05:42:34.663408 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:42:34.663639 kubelet[2893]: E0127 05:42:34.663498 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in 
pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:34.663639 kubelet[2893]: E0127 05:42:34.663541 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:42:34.979084 containerd[1684]: time="2026-01-27T05:42:34.978126774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:42:35.319595 containerd[1684]: time="2026-01-27T05:42:35.319548626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:35.321168 containerd[1684]: time="2026-01-27T05:42:35.321130456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:42:35.321227 containerd[1684]: time="2026-01-27T05:42:35.321205140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:35.321577 kubelet[2893]: E0127 
05:42:35.321388 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:35.321577 kubelet[2893]: E0127 05:42:35.321434 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:35.321577 kubelet[2893]: E0127 05:42:35.321517 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:35.321577 kubelet[2893]: E0127 05:42:35.321547 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:42:38.976945 containerd[1684]: time="2026-01-27T05:42:38.976741275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:42:38.978163 kubelet[2893]: E0127 05:42:38.978127 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:42:39.321905 containerd[1684]: time="2026-01-27T05:42:39.321804573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:39.323471 containerd[1684]: time="2026-01-27T05:42:39.323437115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:42:39.323552 containerd[1684]: time="2026-01-27T05:42:39.323511635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:42:39.325120 kubelet[2893]: E0127 05:42:39.325065 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:39.325288 kubelet[2893]: E0127 05:42:39.325218 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:39.325905 kubelet[2893]: E0127 05:42:39.325833 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:39.325905 kubelet[2893]: E0127 05:42:39.325876 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:42:43.975001 containerd[1684]: time="2026-01-27T05:42:43.974968599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:42:44.316034 containerd[1684]: time="2026-01-27T05:42:44.315981120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:42:44.318397 containerd[1684]: time="2026-01-27T05:42:44.318364621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:42:44.318476 containerd[1684]: time="2026-01-27T05:42:44.318439792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 
27 05:42:44.318630 kubelet[2893]: E0127 05:42:44.318600 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:44.318889 kubelet[2893]: E0127 05:42:44.318651 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:42:44.318889 kubelet[2893]: E0127 05:42:44.318737 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:42:44.318889 kubelet[2893]: E0127 05:42:44.318765 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:42:45.379795 sshd[5097]: Connection closed by 162.142.125.125 port 38842 [preauth] Jan 27 05:42:45.380808 systemd[1]: sshd@9-10.0.2.139:22-162.142.125.125:38842.service: Deactivated successfully. 
Jan 27 05:42:45.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.139:22-162.142.125.125:38842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:45.385043 kernel: audit: type=1131 audit(1769492565.380:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.139:22-162.142.125.125:38842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:45.975033 kubelet[2893]: E0127 05:42:45.974983 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:42:45.976305 kubelet[2893]: E0127 05:42:45.976265 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:42:47.659208 systemd[1]: Started sshd@10-10.0.2.139:22-4.153.228.146:45644.service - OpenSSH per-connection server daemon (4.153.228.146:45644). Jan 27 05:42:47.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.139:22-4.153.228.146:45644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:47.664435 kernel: audit: type=1130 audit(1769492567.658:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.139:22-4.153.228.146:45644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:47.976547 kubelet[2893]: E0127 05:42:47.976220 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:42:48.202000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.203702 sshd[5155]: Accepted publickey for core from 4.153.228.146 port 45644 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:48.205678 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 27 05:42:48.203000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.211130 kernel: audit: type=1101 audit(1769492568.202:757): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.211182 kernel: audit: type=1103 audit(1769492568.203:758): pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.216485 kernel: audit: type=1006 audit(1769492568.203:759): pid=5155 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 27 05:42:48.218730 systemd-logind[1646]: New session 11 of user core. 
Jan 27 05:42:48.219202 kernel: audit: type=1300 audit(1769492568.203:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff94c79be0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:48.203000 audit[5155]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff94c79be0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:48.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:48.226328 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 27 05:42:48.228790 kernel: audit: type=1327 audit(1769492568.203:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:48.230000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.234000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.239655 kernel: audit: type=1105 audit(1769492568.230:760): pid=5155 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.239726 kernel: audit: type=1103 audit(1769492568.234:761): pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.622084 sshd[5159]: Connection closed by 4.153.228.146 port 45644 Jan 27 05:42:48.622812 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:48.624000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.633260 kernel: audit: type=1106 audit(1769492568.624:762): pid=5155 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.632865 systemd-logind[1646]: Session 11 logged out. Waiting for processes to exit. Jan 27 05:42:48.624000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:48.634417 systemd[1]: sshd@10-10.0.2.139:22-4.153.228.146:45644.service: Deactivated successfully. 
Jan 27 05:42:48.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.139:22-4.153.228.146:45644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:48.636498 systemd[1]: session-11.scope: Deactivated successfully. Jan 27 05:42:48.638120 systemd-logind[1646]: Removed session 11. Jan 27 05:42:49.974630 kubelet[2893]: E0127 05:42:49.974596 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:42:50.978097 kubelet[2893]: E0127 05:42:50.978045 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:42:53.183062 update_engine[1651]: I20260127 
05:42:53.182513 1651 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 27 05:42:53.183062 update_engine[1651]: I20260127 05:42:53.182560 1651 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 27 05:42:53.183062 update_engine[1651]: I20260127 05:42:53.182749 1651 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 27 05:42:53.184398 update_engine[1651]: I20260127 05:42:53.184348 1651 omaha_request_params.cc:62] Current group set to developer Jan 27 05:42:53.184591 update_engine[1651]: I20260127 05:42:53.184576 1651 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 27 05:42:53.184792 update_engine[1651]: I20260127 05:42:53.184651 1651 update_attempter.cc:643] Scheduling an action processor start. Jan 27 05:42:53.184792 update_engine[1651]: I20260127 05:42:53.184675 1651 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 27 05:42:53.187029 update_engine[1651]: I20260127 05:42:53.186620 1651 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 27 05:42:53.187029 update_engine[1651]: I20260127 05:42:53.186710 1651 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 27 05:42:53.187029 update_engine[1651]: I20260127 05:42:53.186719 1651 omaha_request_action.cc:272] Request: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: Jan 27 05:42:53.187029 update_engine[1651]: I20260127 05:42:53.186726 1651 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:42:53.197276 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Jan 27 05:42:53.198271 update_engine[1651]: I20260127 05:42:53.197592 1651 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:42:53.198271 update_engine[1651]: I20260127 05:42:53.198218 1651 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:42:53.206668 update_engine[1651]: E20260127 05:42:53.206640 1651 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:42:53.206802 update_engine[1651]: I20260127 05:42:53.206789 1651 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 27 05:42:53.731736 systemd[1]: Started sshd@11-10.0.2.139:22-4.153.228.146:45646.service - OpenSSH per-connection server daemon (4.153.228.146:45646). Jan 27 05:42:53.736423 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 27 05:42:53.736489 kernel: audit: type=1130 audit(1769492573.730:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.139:22-4.153.228.146:45646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:53.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.139:22-4.153.228.146:45646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:53.975915 kubelet[2893]: E0127 05:42:53.975877 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:42:54.295000 audit[5174]: USER_ACCT pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.298167 sshd[5174]: Accepted publickey for core from 4.153.228.146 port 45646 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:42:54.301113 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:42:54.298000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.303289 kernel: audit: type=1101 audit(1769492574.295:766): pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.303353 kernel: audit: type=1103 audit(1769492574.298:767): pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.307330 kernel: audit: type=1006 audit(1769492574.299:768): pid=5174 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 27 05:42:54.299000 audit[5174]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff74e465d0 a2=3 a3=0 items=0 ppid=1 pid=5174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:54.299000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:54.315303 kernel: audit: type=1300 audit(1769492574.299:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff74e465d0 a2=3 a3=0 items=0 ppid=1 pid=5174 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:42:54.315351 kernel: audit: type=1327 audit(1769492574.299:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:42:54.317929 systemd-logind[1646]: New session 12 of user core. Jan 27 05:42:54.324214 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 27 05:42:54.328000 audit[5174]: USER_START pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.336059 kernel: audit: type=1105 audit(1769492574.328:769): pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.331000 audit[5178]: CRED_ACQ pid=5178 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.342032 kernel: audit: type=1103 audit(1769492574.331:770): pid=5178 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.665245 sshd[5178]: Connection closed by 4.153.228.146 port 45646 Jan 27 05:42:54.664539 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Jan 27 05:42:54.665000 audit[5174]: USER_END pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.672434 systemd[1]: 
sshd@11-10.0.2.139:22-4.153.228.146:45646.service: Deactivated successfully. Jan 27 05:42:54.674056 systemd-logind[1646]: Session 12 logged out. Waiting for processes to exit. Jan 27 05:42:54.675546 systemd[1]: session-12.scope: Deactivated successfully. Jan 27 05:42:54.677044 kernel: audit: type=1106 audit(1769492574.665:771): pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.665000 audit[5174]: CRED_DISP pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.680748 systemd-logind[1646]: Removed session 12. Jan 27 05:42:54.684036 kernel: audit: type=1104 audit(1769492574.665:772): pid=5174 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:42:54.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.139:22-4.153.228.146:45646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:42:57.977038 kubelet[2893]: E0127 05:42:57.976732 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:42:57.978486 kubelet[2893]: E0127 05:42:57.978149 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:42:58.975589 kubelet[2893]: E0127 05:42:58.975385 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:42:59.781685 systemd[1]: Started sshd@12-10.0.2.139:22-4.153.228.146:49972.service - OpenSSH per-connection server daemon (4.153.228.146:49972). Jan 27 05:42:59.790284 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:42:59.790322 kernel: audit: type=1130 audit(1769492579.781:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.2.139:22-4.153.228.146:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:59.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.2.139:22-4.153.228.146:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:42:59.975480 kubelet[2893]: E0127 05:42:59.975215 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:43:00.315000 audit[5191]: USER_ACCT pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.318175 sshd[5191]: Accepted publickey for core from 4.153.228.146 port 49972 ssh2: RSA 
SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:00.320310 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:00.322038 kernel: audit: type=1101 audit(1769492580.315:775): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.318000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.329105 kernel: audit: type=1103 audit(1769492580.318:776): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.329175 kernel: audit: type=1006 audit(1769492580.318:777): pid=5191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 27 05:43:00.318000 audit[5191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5a245b70 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:00.336025 kernel: audit: type=1300 audit(1769492580.318:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5a245b70 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 05:43:00.318000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:00.336282 systemd-logind[1646]: New session 13 of user core. Jan 27 05:43:00.339023 kernel: audit: type=1327 audit(1769492580.318:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:00.348183 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 27 05:43:00.349000 audit[5191]: USER_START pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.357035 kernel: audit: type=1105 audit(1769492580.349:778): pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.351000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.362069 kernel: audit: type=1103 audit(1769492580.351:779): pid=5195 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.683910 sshd[5195]: Connection closed by 4.153.228.146 port 49972 Jan 27 05:43:00.684581 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:00.685000 audit[5191]: 
USER_END pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.690476 systemd[1]: sshd@12-10.0.2.139:22-4.153.228.146:49972.service: Deactivated successfully. Jan 27 05:43:00.686000 audit[5191]: CRED_DISP pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.695951 systemd[1]: session-13.scope: Deactivated successfully. Jan 27 05:43:00.696549 kernel: audit: type=1106 audit(1769492580.685:780): pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.696602 kernel: audit: type=1104 audit(1769492580.686:781): pid=5191 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:00.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.2.139:22-4.153.228.146:49972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:00.702704 systemd-logind[1646]: Session 13 logged out. Waiting for processes to exit. Jan 27 05:43:00.703573 systemd-logind[1646]: Removed session 13. 
Jan 27 05:43:00.791348 systemd[1]: Started sshd@13-10.0.2.139:22-4.153.228.146:49980.service - OpenSSH per-connection server daemon (4.153.228.146:49980). Jan 27 05:43:00.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.2.139:22-4.153.228.146:49980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:00.976560 kubelet[2893]: E0127 05:43:00.975694 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:43:01.324000 audit[5207]: USER_ACCT pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.326689 sshd[5207]: Accepted publickey for core from 4.153.228.146 port 49980 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:01.326000 audit[5207]: CRED_ACQ pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.326000 audit[5207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadcfd2f0 a2=3 a3=0 items=0 ppid=1 pid=5207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:01.326000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:01.330074 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:01.340067 systemd-logind[1646]: New session 14 of user core. Jan 27 05:43:01.344384 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 27 05:43:01.347000 audit[5207]: USER_START pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.349000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.729215 sshd[5213]: Connection closed by 4.153.228.146 port 49980 Jan 27 05:43:01.731567 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:01.732000 audit[5207]: USER_END pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.732000 audit[5207]: CRED_DISP pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:01.735966 
systemd-logind[1646]: Session 14 logged out. Waiting for processes to exit. Jan 27 05:43:01.737031 systemd[1]: sshd@13-10.0.2.139:22-4.153.228.146:49980.service: Deactivated successfully. Jan 27 05:43:01.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.2.139:22-4.153.228.146:49980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:01.739295 systemd[1]: session-14.scope: Deactivated successfully. Jan 27 05:43:01.741524 systemd-logind[1646]: Removed session 14. Jan 27 05:43:01.838244 systemd[1]: Started sshd@14-10.0.2.139:22-4.153.228.146:49984.service - OpenSSH per-connection server daemon (4.153.228.146:49984). Jan 27 05:43:01.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.2.139:22-4.153.228.146:49984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:02.352000 audit[5227]: USER_ACCT pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.355090 sshd[5227]: Accepted publickey for core from 4.153.228.146 port 49984 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:02.354000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.354000 audit[5227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeef729140 a2=3 a3=0 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:02.354000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:02.356704 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:02.364875 systemd-logind[1646]: New session 15 of user core. Jan 27 05:43:02.369197 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 27 05:43:02.371000 audit[5227]: USER_START pid=5227 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.373000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.708490 sshd[5257]: Connection closed by 4.153.228.146 port 49984 Jan 27 05:43:02.710168 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:02.710000 audit[5227]: USER_END pid=5227 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.710000 audit[5227]: CRED_DISP pid=5227 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:02.713700 systemd[1]: sshd@14-10.0.2.139:22-4.153.228.146:49984.service: Deactivated successfully. Jan 27 05:43:02.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.2.139:22-4.153.228.146:49984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:02.716401 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 27 05:43:02.717975 systemd-logind[1646]: Session 15 logged out. Waiting for processes to exit. Jan 27 05:43:02.719117 systemd-logind[1646]: Removed session 15. Jan 27 05:43:03.180157 update_engine[1651]: I20260127 05:43:03.180096 1651 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:43:03.180484 update_engine[1651]: I20260127 05:43:03.180176 1651 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:43:03.180510 update_engine[1651]: I20260127 05:43:03.180478 1651 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:43:03.188263 update_engine[1651]: E20260127 05:43:03.188215 1651 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:43:03.188372 update_engine[1651]: I20260127 05:43:03.188300 1651 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 27 05:43:04.976269 kubelet[2893]: E0127 05:43:04.976033 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:43:05.976608 kubelet[2893]: E0127 05:43:05.976556 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:43:07.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.2.139:22-4.153.228.146:58300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:07.825769 systemd[1]: Started sshd@15-10.0.2.139:22-4.153.228.146:58300.service - OpenSSH per-connection server daemon (4.153.228.146:58300). Jan 27 05:43:07.827580 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 27 05:43:07.827637 kernel: audit: type=1130 audit(1769492587.825:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.2.139:22-4.153.228.146:58300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:08.392000 audit[5270]: USER_ACCT pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.395703 sshd[5270]: Accepted publickey for core from 4.153.228.146 port 58300 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:08.398033 kernel: audit: type=1101 audit(1769492588.392:802): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.396900 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:08.394000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.404139 kernel: audit: type=1103 audit(1769492588.394:803): pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.404524 kernel: audit: type=1006 audit(1769492588.394:804): pid=5270 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 27 05:43:08.394000 audit[5270]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc662bcbc0 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:08.411100 kernel: audit: type=1300 audit(1769492588.394:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc662bcbc0 a2=3 a3=0 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:08.394000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:08.414122 kernel: audit: type=1327 audit(1769492588.394:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:08.418566 systemd-logind[1646]: New session 16 of user core. Jan 27 05:43:08.425678 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 27 05:43:08.428000 audit[5270]: USER_START pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.439678 kernel: audit: type=1105 audit(1769492588.428:805): pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.439765 kernel: audit: type=1103 audit(1769492588.434:806): pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.434000 audit[5274]: CRED_ACQ 
pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.784105 sshd[5274]: Connection closed by 4.153.228.146 port 58300 Jan 27 05:43:08.785888 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:08.793377 kernel: audit: type=1106 audit(1769492588.786:807): pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.786000 audit[5270]: USER_END pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.789418 systemd[1]: sshd@15-10.0.2.139:22-4.153.228.146:58300.service: Deactivated successfully. Jan 27 05:43:08.791143 systemd[1]: session-16.scope: Deactivated successfully. Jan 27 05:43:08.793829 systemd-logind[1646]: Session 16 logged out. Waiting for processes to exit. Jan 27 05:43:08.786000 audit[5270]: CRED_DISP pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.2.139:22-4.153.228.146:58300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 27 05:43:08.799309 kernel: audit: type=1104 audit(1769492588.786:808): pid=5270 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:08.798917 systemd-logind[1646]: Removed session 16. Jan 27 05:43:08.898309 systemd[1]: Started sshd@16-10.0.2.139:22-4.153.228.146:58302.service - OpenSSH per-connection server daemon (4.153.228.146:58302). Jan 27 05:43:08.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.2.139:22-4.153.228.146:58302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:08.979204 kubelet[2893]: E0127 05:43:08.979160 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:43:09.449000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:09.450967 sshd[5286]: Accepted publickey for core from 4.153.228.146 port 58302 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:09.451000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:09.451000 audit[5286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe895f9470 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:09.451000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:09.453056 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:09.460787 systemd-logind[1646]: New session 17 of user core. Jan 27 05:43:09.467576 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 27 05:43:09.471000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:09.474000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:09.974607 kubelet[2893]: E0127 05:43:09.974576 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:43:10.162708 sshd[5290]: Connection closed by 4.153.228.146 port 58302 Jan 27 05:43:10.163583 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:10.166000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:10.166000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:10.169882 systemd[1]: sshd@16-10.0.2.139:22-4.153.228.146:58302.service: Deactivated successfully. Jan 27 05:43:10.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.2.139:22-4.153.228.146:58302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:10.172875 systemd[1]: session-17.scope: Deactivated successfully. Jan 27 05:43:10.175240 systemd-logind[1646]: Session 17 logged out. Waiting for processes to exit. Jan 27 05:43:10.176570 systemd-logind[1646]: Removed session 17. Jan 27 05:43:10.270260 systemd[1]: Started sshd@17-10.0.2.139:22-4.153.228.146:58308.service - OpenSSH per-connection server daemon (4.153.228.146:58308). Jan 27 05:43:10.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.2.139:22-4.153.228.146:58308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:10.795000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:10.797474 sshd[5301]: Accepted publickey for core from 4.153.228.146 port 58308 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:10.796000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:10.796000 audit[5301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc90c461b0 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:10.796000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:10.798638 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:10.803290 systemd-logind[1646]: New session 18 of user core. Jan 27 05:43:10.809238 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 27 05:43:10.810000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:10.812000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:11.782000 audit[5315]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:11.782000 audit[5315]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdf231ae80 a2=0 a3=7ffdf231ae6c items=0 ppid=3018 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:11.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:43:11.786000 audit[5315]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:11.786000 audit[5315]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf231ae80 a2=0 a3=0 items=0 ppid=3018 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:11.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 
05:43:11.882296 sshd[5305]: Connection closed by 4.153.228.146 port 58308 Jan 27 05:43:11.883089 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:11.884000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:11.884000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:11.888454 systemd[1]: sshd@17-10.0.2.139:22-4.153.228.146:58308.service: Deactivated successfully. Jan 27 05:43:11.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.2.139:22-4.153.228.146:58308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:11.890726 systemd[1]: session-18.scope: Deactivated successfully. Jan 27 05:43:11.893158 systemd-logind[1646]: Session 18 logged out. Waiting for processes to exit. Jan 27 05:43:11.895163 systemd-logind[1646]: Removed session 18. Jan 27 05:43:11.994355 systemd[1]: Started sshd@18-10.0.2.139:22-4.153.228.146:58322.service - OpenSSH per-connection server daemon (4.153.228.146:58322). Jan 27 05:43:11.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.139:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:12.550000 audit[5320]: USER_ACCT pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:12.551536 sshd[5320]: Accepted publickey for core from 4.153.228.146 port 58322 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:12.551000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:12.551000 audit[5320]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff96457670 a2=3 a3=0 items=0 ppid=1 pid=5320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:12.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:12.553191 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:12.558232 systemd-logind[1646]: New session 19 of user core. Jan 27 05:43:12.568217 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 27 05:43:12.570000 audit[5320]: USER_START pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:12.572000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:12.818000 audit[5331]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:12.818000 audit[5331]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee9456080 a2=0 a3=7ffee945606c items=0 ppid=3018 pid=5331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:12.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:43:12.823000 audit[5331]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:12.823000 audit[5331]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee9456080 a2=0 a3=0 items=0 ppid=3018 pid=5331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:12.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 
05:43:12.975461 kubelet[2893]: E0127 05:43:12.974666 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:43:13.043004 sshd[5324]: Connection closed by 4.153.228.146 port 58322 Jan 27 05:43:13.044814 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:13.055645 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 27 05:43:13.055739 kernel: audit: type=1106 audit(1769492593.048:838): pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.048000 audit[5320]: USER_END pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.057925 systemd[1]: sshd@18-10.0.2.139:22-4.153.228.146:58322.service: Deactivated successfully. Jan 27 05:43:13.060904 systemd[1]: session-19.scope: Deactivated successfully. Jan 27 05:43:13.063925 systemd-logind[1646]: Session 19 logged out. Waiting for processes to exit. 
Jan 27 05:43:13.070037 kernel: audit: type=1104 audit(1769492593.055:839): pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.055000 audit[5320]: CRED_DISP pid=5320 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.067456 systemd-logind[1646]: Removed session 19. Jan 27 05:43:13.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.139:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:13.075095 kernel: audit: type=1131 audit(1769492593.057:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.139:22-4.153.228.146:58322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:13.153371 systemd[1]: Started sshd@19-10.0.2.139:22-4.153.228.146:58330.service - OpenSSH per-connection server daemon (4.153.228.146:58330). Jan 27 05:43:13.158423 kernel: audit: type=1130 audit(1769492593.152:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.139:22-4.153.228.146:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:13.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.139:22-4.153.228.146:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:13.180156 update_engine[1651]: I20260127 05:43:13.180042 1651 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:43:13.180156 update_engine[1651]: I20260127 05:43:13.180114 1651 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:43:13.180480 update_engine[1651]: I20260127 05:43:13.180390 1651 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:43:13.186866 update_engine[1651]: E20260127 05:43:13.186220 1651 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:43:13.186866 update_engine[1651]: I20260127 05:43:13.186291 1651 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 27 05:43:13.744210 kernel: audit: type=1101 audit(1769492593.733:842): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.733000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.744499 sshd[5336]: Accepted publickey for core from 4.153.228.146 port 58330 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:13.741000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.747213 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:13.752051 
kernel: audit: type=1103 audit(1769492593.741:843): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.741000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c261de0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:13.762278 kernel: audit: type=1006 audit(1769492593.741:844): pid=5336 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 27 05:43:13.762349 kernel: audit: type=1300 audit(1769492593.741:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c261de0 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:13.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:13.770062 kernel: audit: type=1327 audit(1769492593.741:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:13.774136 systemd-logind[1646]: New session 20 of user core. Jan 27 05:43:13.777254 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 27 05:43:13.781000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.783000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.790093 kernel: audit: type=1105 audit(1769492593.781:845): pid=5336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:13.976461 kubelet[2893]: E0127 05:43:13.976182 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:43:13.976461 kubelet[2893]: E0127 05:43:13.976214 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:43:14.096366 sshd[5340]: Connection closed by 4.153.228.146 port 58330 Jan 27 05:43:14.096228 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:14.097000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:14.097000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:14.100514 systemd[1]: sshd@19-10.0.2.139:22-4.153.228.146:58330.service: Deactivated successfully. Jan 27 05:43:14.100947 systemd-logind[1646]: Session 20 logged out. Waiting for processes to exit. Jan 27 05:43:14.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.139:22-4.153.228.146:58330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:14.102684 systemd[1]: session-20.scope: Deactivated successfully. Jan 27 05:43:14.105717 systemd-logind[1646]: Removed session 20. 
Jan 27 05:43:16.169000 audit[5351]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:16.169000 audit[5351]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc00ef5c0 a2=0 a3=7ffcc00ef5ac items=0 ppid=3018 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:16.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:43:16.176000 audit[5351]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 05:43:16.176000 audit[5351]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcc00ef5c0 a2=0 a3=7ffcc00ef5ac items=0 ppid=3018 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:16.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 05:43:19.220834 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 27 05:43:19.220937 kernel: audit: type=1130 audit(1769492599.216:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.139:22-4.153.228.146:53852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:19.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.139:22-4.153.228.146:53852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:19.217268 systemd[1]: Started sshd@20-10.0.2.139:22-4.153.228.146:53852.service - OpenSSH per-connection server daemon (4.153.228.146:53852). Jan 27 05:43:19.759000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.761692 sshd[5354]: Accepted publickey for core from 4.153.228.146 port 53852 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:19.762970 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:19.767370 systemd-logind[1646]: New session 21 of user core. Jan 27 05:43:19.768267 kernel: audit: type=1101 audit(1769492599.759:853): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.761000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.774099 kernel: audit: type=1103 audit(1769492599.761:854): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.775376 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 27 05:43:19.779052 kernel: audit: type=1006 audit(1769492599.761:855): pid=5354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 27 05:43:19.761000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcfac01a0 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:19.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:19.786432 kernel: audit: type=1300 audit(1769492599.761:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcfac01a0 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:19.787286 kernel: audit: type=1327 audit(1769492599.761:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:19.779000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.790055 kernel: audit: type=1105 audit(1769492599.779:856): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.785000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.795244 kernel: audit: type=1103 audit(1769492599.785:857): pid=5358 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:19.975335 kubelet[2893]: E0127 05:43:19.975252 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:43:20.136100 sshd[5358]: Connection closed by 4.153.228.146 port 53852 Jan 27 05:43:20.136520 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:20.136000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:20.140970 systemd[1]: sshd@20-10.0.2.139:22-4.153.228.146:53852.service: Deactivated successfully. 
Jan 27 05:43:20.137000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:20.143823 systemd[1]: session-21.scope: Deactivated successfully. Jan 27 05:43:20.145097 kernel: audit: type=1106 audit(1769492600.136:858): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:20.145151 kernel: audit: type=1104 audit(1769492600.137:859): pid=5354 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:20.147186 systemd-logind[1646]: Session 21 logged out. Waiting for processes to exit. Jan 27 05:43:20.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.139:22-4.153.228.146:53852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:20.149822 systemd-logind[1646]: Removed session 21. 
Jan 27 05:43:20.975405 kubelet[2893]: E0127 05:43:20.975359 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:43:20.978177 kubelet[2893]: E0127 05:43:20.978117 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:43:22.975488 kubelet[2893]: E0127 05:43:22.975301 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:43:23.183113 update_engine[1651]: I20260127 05:43:23.183053 1651 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:43:23.183416 update_engine[1651]: I20260127 05:43:23.183143 1651 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:43:23.184259 update_engine[1651]: I20260127 05:43:23.184212 1651 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 27 05:43:23.191177 update_engine[1651]: E20260127 05:43:23.191141 1651 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:43:23.191344 update_engine[1651]: I20260127 05:43:23.191320 1651 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191375 1651 omaha_request_action.cc:617] Omaha request response: Jan 27 05:43:23.191888 update_engine[1651]: E20260127 05:43:23.191443 1651 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191461 1651 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191466 1651 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191473 1651 update_attempter.cc:306] Processing Done. Jan 27 05:43:23.191888 update_engine[1651]: E20260127 05:43:23.191486 1651 update_attempter.cc:619] Update failed. Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191493 1651 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191497 1651 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191501 1651 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191560 1651 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191579 1651 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 27 05:43:23.191888 update_engine[1651]: I20260127 05:43:23.191584 1651 omaha_request_action.cc:272] Request: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.191888 update_engine[1651]: Jan 27 05:43:23.192252 update_engine[1651]: I20260127 05:43:23.191591 1651 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 27 05:43:23.192252 update_engine[1651]: I20260127 05:43:23.191609 1651 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 27 05:43:23.192252 update_engine[1651]: I20260127 05:43:23.191854 1651 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 27 05:43:23.192587 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 27 05:43:23.198094 update_engine[1651]: E20260127 05:43:23.198066 1651 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 27 05:43:23.198216 update_engine[1651]: I20260127 05:43:23.198202 1651 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 27 05:43:23.198263 update_engine[1651]: I20260127 05:43:23.198253 1651 omaha_request_action.cc:617] Omaha request response: Jan 27 05:43:23.198295 update_engine[1651]: I20260127 05:43:23.198287 1651 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:43:23.198327 update_engine[1651]: I20260127 05:43:23.198319 1651 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 27 05:43:23.198517 update_engine[1651]: I20260127 05:43:23.198350 1651 update_attempter.cc:306] Processing Done. Jan 27 05:43:23.198517 update_engine[1651]: I20260127 05:43:23.198357 1651 update_attempter.cc:310] Error event sent. Jan 27 05:43:23.198517 update_engine[1651]: I20260127 05:43:23.198364 1651 update_check_scheduler.cc:74] Next update check in 44m51s Jan 27 05:43:23.198715 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 27 05:43:25.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.139:22-4.153.228.146:43192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:25.242220 systemd[1]: Started sshd@21-10.0.2.139:22-4.153.228.146:43192.service - OpenSSH per-connection server daemon (4.153.228.146:43192). 
Jan 27 05:43:25.243221 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:43:25.243263 kernel: audit: type=1130 audit(1769492605.241:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.139:22-4.153.228.146:43192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:25.764000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.765900 sshd[5372]: Accepted publickey for core from 4.153.228.146 port 43192 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:25.770031 kernel: audit: type=1101 audit(1769492605.764:862): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.770850 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:25.768000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.776072 kernel: audit: type=1103 audit(1769492605.768:863): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.776127 kernel: audit: type=1006 audit(1769492605.769:864): pid=5372 
uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 27 05:43:25.769000 audit[5372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3ff6d940 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:25.779644 kernel: audit: type=1300 audit(1769492605.769:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3ff6d940 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:25.769000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:25.784031 kernel: audit: type=1327 audit(1769492605.769:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:25.788469 systemd-logind[1646]: New session 22 of user core. Jan 27 05:43:25.796231 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 27 05:43:25.798000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.805094 kernel: audit: type=1105 audit(1769492605.798:865): pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.807000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:25.812043 kernel: audit: type=1103 audit(1769492605.807:866): pid=5376 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:26.125482 sshd[5376]: Connection closed by 4.153.228.146 port 43192 Jan 27 05:43:26.126584 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:26.128000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:26.135029 kernel: audit: type=1106 
audit(1769492606.128:867): pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:26.135320 systemd[1]: sshd@21-10.0.2.139:22-4.153.228.146:43192.service: Deactivated successfully. Jan 27 05:43:26.131000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:26.139641 systemd[1]: session-22.scope: Deactivated successfully. Jan 27 05:43:26.141033 kernel: audit: type=1104 audit(1769492606.131:868): pid=5372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:26.141137 systemd-logind[1646]: Session 22 logged out. Waiting for processes to exit. Jan 27 05:43:26.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.139:22-4.153.228.146:43192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:26.143807 systemd-logind[1646]: Removed session 22. 
Jan 27 05:43:27.976595 kubelet[2893]: E0127 05:43:27.976528 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:43:28.976629 kubelet[2893]: E0127 05:43:28.975996 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:43:28.977157 kubelet[2893]: E0127 05:43:28.977137 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:43:30.975739 kubelet[2893]: E0127 05:43:30.975703 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:43:31.244033 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:43:31.244127 kernel: audit: type=1130 audit(1769492611.241:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.139:22-4.153.228.146:43202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:31.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.139:22-4.153.228.146:43202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:31.242327 systemd[1]: Started sshd@22-10.0.2.139:22-4.153.228.146:43202.service - OpenSSH per-connection server daemon (4.153.228.146:43202). 
Jan 27 05:43:31.786000 audit[5389]: USER_ACCT pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.788512 sshd[5389]: Accepted publickey for core from 4.153.228.146 port 43202 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:31.789683 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:31.786000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.794287 kernel: audit: type=1101 audit(1769492611.786:871): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.794348 kernel: audit: type=1103 audit(1769492611.786:872): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.796265 systemd-logind[1646]: New session 23 of user core. Jan 27 05:43:31.798616 kernel: audit: type=1006 audit(1769492611.786:873): pid=5389 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 27 05:43:31.799161 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 27 05:43:31.786000 audit[5389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5fe0b4a0 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:31.802217 kernel: audit: type=1300 audit(1769492611.786:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5fe0b4a0 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:31.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:31.802000 audit[5389]: USER_START pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.809416 kernel: audit: type=1327 audit(1769492611.786:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:31.809467 kernel: audit: type=1105 audit(1769492611.802:874): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:31.805000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 
05:43:31.814080 kernel: audit: type=1103 audit(1769492611.805:875): pid=5393 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:32.182423 sshd[5393]: Connection closed by 4.153.228.146 port 43202 Jan 27 05:43:32.184719 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:32.196417 kernel: audit: type=1106 audit(1769492612.187:876): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:32.187000 audit[5389]: USER_END pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:32.197260 systemd[1]: sshd@22-10.0.2.139:22-4.153.228.146:43202.service: Deactivated successfully. Jan 27 05:43:32.199962 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 27 05:43:32.187000 audit[5389]: CRED_DISP pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:32.206836 kernel: audit: type=1104 audit(1769492612.187:877): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:32.207162 systemd-logind[1646]: Session 23 logged out. Waiting for processes to exit. Jan 27 05:43:32.209686 systemd-logind[1646]: Removed session 23. Jan 27 05:43:32.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.139:22-4.153.228.146:43202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:32.977438 kubelet[2893]: E0127 05:43:32.977381 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:43:35.976559 kubelet[2893]: E0127 05:43:35.976452 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:43:37.294279 systemd[1]: Started sshd@23-10.0.2.139:22-4.153.228.146:55262.service - OpenSSH per-connection server daemon (4.153.228.146:55262). Jan 27 05:43:37.300256 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:43:37.300406 kernel: audit: type=1130 audit(1769492617.293:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.2.139:22-4.153.228.146:55262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:37.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.2.139:22-4.153.228.146:55262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 05:43:37.841000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.845409 sshd[5429]: Accepted publickey for core from 4.153.228.146 port 55262 ssh2: RSA SHA256:NcHlXFYi38OyIGsNDAChhkfBbXxwcr6UHvOuCQE3OYc Jan 27 05:43:37.847465 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 05:43:37.845000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.848923 kernel: audit: type=1101 audit(1769492617.841:880): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.849002 kernel: audit: type=1103 audit(1769492617.845:881): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.852787 kernel: audit: type=1006 audit(1769492617.845:882): pid=5429 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 27 05:43:37.845000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea7e47fa0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:37.856300 kernel: audit: type=1300 audit(1769492617.845:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea7e47fa0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:43:37.859203 kernel: audit: type=1327 audit(1769492617.845:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:37.845000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 05:43:37.862584 systemd-logind[1646]: New session 24 of user core. Jan 27 05:43:37.868246 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 27 05:43:37.880084 kernel: audit: type=1105 audit(1769492617.872:883): pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.872000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.885117 kernel: audit: type=1103 audit(1769492617.879:884): pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.879000 audit[5433]: CRED_ACQ 
pid=5433 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:37.976446 kubelet[2893]: E0127 05:43:37.976389 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:43:38.202085 sshd[5433]: Connection closed by 4.153.228.146 port 55262 Jan 27 05:43:38.203798 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Jan 27 05:43:38.203000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:38.209330 systemd[1]: sshd@23-10.0.2.139:22-4.153.228.146:55262.service: Deactivated successfully. 
Jan 27 05:43:38.203000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:38.211468 systemd[1]: session-24.scope: Deactivated successfully. Jan 27 05:43:38.211864 kernel: audit: type=1106 audit(1769492618.203:885): pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:38.211911 kernel: audit: type=1104 audit(1769492618.203:886): pid=5429 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 27 05:43:38.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.2.139:22-4.153.228.146:55262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 05:43:38.215040 systemd-logind[1646]: Session 24 logged out. Waiting for processes to exit. Jan 27 05:43:38.215920 systemd-logind[1646]: Removed session 24. 
Jan 27 05:43:39.974899 kubelet[2893]: E0127 05:43:39.974862 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:43:42.978386 kubelet[2893]: E0127 05:43:42.978342 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:43:43.976211 kubelet[2893]: E0127 05:43:43.975090 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:43:43.976710 kubelet[2893]: E0127 05:43:43.976671 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:43:47.975569 kubelet[2893]: E0127 05:43:47.975160 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:43:48.975311 containerd[1684]: time="2026-01-27T05:43:48.975209134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 05:43:49.313896 containerd[1684]: time="2026-01-27T05:43:49.313857657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:49.315959 containerd[1684]: time="2026-01-27T05:43:49.315879595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 05:43:49.315959 containerd[1684]: time="2026-01-27T05:43:49.315932251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:49.316156 kubelet[2893]: E0127 05:43:49.316125 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:43:49.316410 kubelet[2893]: E0127 05:43:49.316166 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 05:43:49.316410 kubelet[2893]: E0127 05:43:49.316228 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:49.317191 containerd[1684]: time="2026-01-27T05:43:49.317027669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 05:43:49.662222 containerd[1684]: time="2026-01-27T05:43:49.661845805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:49.664050 containerd[1684]: time="2026-01-27T05:43:49.663930173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 05:43:49.664270 containerd[1684]: time="2026-01-27T05:43:49.664193325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:49.664418 kubelet[2893]: E0127 05:43:49.664379 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:43:49.664493 kubelet[2893]: E0127 05:43:49.664425 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 05:43:49.664619 kubelet[2893]: E0127 05:43:49.664498 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66dfc449f6-njgj9_calico-system(f422b70f-feda-43c7-ab21-bd446de0a9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:49.664619 kubelet[2893]: E0127 05:43:49.664530 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:43:51.974409 kubelet[2893]: E0127 05:43:51.974360 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:43:52.975204 containerd[1684]: time="2026-01-27T05:43:52.975163859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 05:43:53.289838 containerd[1684]: time="2026-01-27T05:43:53.289775946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:53.291741 containerd[1684]: time="2026-01-27T05:43:53.291691777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 05:43:53.292347 containerd[1684]: time="2026-01-27T05:43:53.291776126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:53.292416 kubelet[2893]: E0127 05:43:53.291958 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Jan 27 05:43:53.292416 kubelet[2893]: E0127 05:43:53.292044 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 05:43:53.292416 kubelet[2893]: E0127 05:43:53.292140 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4t22j_calico-system(e8248550-dadc-499c-aab6-b47350ead3d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:53.292416 kubelet[2893]: E0127 05:43:53.292190 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:43:54.975929 kubelet[2893]: E0127 05:43:54.975813 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:43:54.977489 containerd[1684]: time="2026-01-27T05:43:54.975867394Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 05:43:55.330690 containerd[1684]: time="2026-01-27T05:43:55.330534170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:43:55.332410 containerd[1684]: time="2026-01-27T05:43:55.332327639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 05:43:55.332773 containerd[1684]: time="2026-01-27T05:43:55.332380985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 05:43:55.332823 kubelet[2893]: E0127 05:43:55.332605 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:43:55.332823 kubelet[2893]: E0127 05:43:55.332644 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 05:43:55.332823 kubelet[2893]: E0127 05:43:55.332715 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65f4f5cc45-xxl57_calico-system(a4123b97-2d89-42ef-9011-27c5f71176fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 05:43:55.332823 kubelet[2893]: E0127 05:43:55.332741 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:43:57.975736 kubelet[2893]: E0127 05:43:57.975656 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:43:59.975656 kubelet[2893]: E0127 05:43:59.975591 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:44:01.975613 kubelet[2893]: E0127 05:44:01.975554 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66dfc449f6-njgj9" podUID="f422b70f-feda-43c7-ab21-bd446de0a9bb" Jan 27 05:44:02.932817 systemd[1]: cri-containerd-428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc.scope: Deactivated successfully. Jan 27 05:44:02.933242 systemd[1]: cri-containerd-428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc.scope: Consumed 3.386s CPU time, 65M memory peak. 
Jan 27 05:44:02.938847 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 05:44:02.938978 kernel: audit: type=1334 audit(1769492642.933:888): prog-id=261 op=LOAD Jan 27 05:44:02.939029 kernel: audit: type=1334 audit(1769492642.933:889): prog-id=83 op=UNLOAD Jan 27 05:44:02.933000 audit: BPF prog-id=261 op=LOAD Jan 27 05:44:02.933000 audit: BPF prog-id=83 op=UNLOAD Jan 27 05:44:02.940000 audit: BPF prog-id=98 op=UNLOAD Jan 27 05:44:02.942911 containerd[1684]: time="2026-01-27T05:44:02.942873836Z" level=info msg="received container exit event container_id:\"428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc\" id:\"428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc\" pid:2704 exit_status:1 exited_at:{seconds:1769492642 nanos:941643077}" Jan 27 05:44:02.943452 kernel: audit: type=1334 audit(1769492642.940:890): prog-id=98 op=UNLOAD Jan 27 05:44:02.943488 kernel: audit: type=1334 audit(1769492642.940:891): prog-id=102 op=UNLOAD Jan 27 05:44:02.940000 audit: BPF prog-id=102 op=UNLOAD Jan 27 05:44:02.971257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc-rootfs.mount: Deactivated successfully. 
Jan 27 05:44:03.357790 kubelet[2893]: E0127 05:44:03.357723 2893 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.139:33712->10.0.2.197:2379: read: connection timed out" Jan 27 05:44:03.568051 kubelet[2893]: I0127 05:44:03.567985 2893 scope.go:117] "RemoveContainer" containerID="428908a93dc3ee226577d8ffe1f6af8982485ad3307fa08452bdc2fe5d341bfc" Jan 27 05:44:03.586287 containerd[1684]: time="2026-01-27T05:44:03.586240411Z" level=info msg="CreateContainer within sandbox \"72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 27 05:44:03.603446 containerd[1684]: time="2026-01-27T05:44:03.601692613Z" level=info msg="Container 8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:44:03.609832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407707057.mount: Deactivated successfully. 
Jan 27 05:44:03.618660 containerd[1684]: time="2026-01-27T05:44:03.618630948Z" level=info msg="CreateContainer within sandbox \"72d0005fc3c3d7dd98d7e51030f7f2293b54303d06c8df6a8d2b6c0359a348a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5\"" Jan 27 05:44:03.619399 containerd[1684]: time="2026-01-27T05:44:03.619375261Z" level=info msg="StartContainer for \"8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5\"" Jan 27 05:44:03.620702 containerd[1684]: time="2026-01-27T05:44:03.620647200Z" level=info msg="connecting to shim 8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5" address="unix:///run/containerd/s/00b760e8df235cdc39cef2e39402448d404b0f8d31e49215b471f19842bb666e" protocol=ttrpc version=3 Jan 27 05:44:03.645206 systemd[1]: Started cri-containerd-8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5.scope - libcontainer container 8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5. 
Jan 27 05:44:03.657000 audit: BPF prog-id=262 op=LOAD Jan 27 05:44:03.657000 audit: BPF prog-id=263 op=LOAD Jan 27 05:44:03.661502 kernel: audit: type=1334 audit(1769492643.657:892): prog-id=262 op=LOAD Jan 27 05:44:03.661568 kernel: audit: type=1334 audit(1769492643.657:893): prog-id=263 op=LOAD Jan 27 05:44:03.657000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.670991 kernel: audit: type=1300 audit(1769492643.657:893): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.671054 kernel: audit: type=1327 audit(1769492643.657:893): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=263 op=UNLOAD Jan 27 05:44:03.674695 kernel: audit: type=1334 audit(1769492643.658:894): prog-id=263 op=UNLOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.677551 kernel: audit: type=1300 audit(1769492643.658:894): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=264 op=LOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=265 op=LOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.658000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=265 op=UNLOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=264 op=UNLOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:03.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.658000 audit: BPF prog-id=266 op=LOAD Jan 27 05:44:03.658000 audit[5492]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2560 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:44:03.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864356632633234313365336231373235626339333934666162633066 Jan 27 05:44:03.684739 systemd[1]: cri-containerd-a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872.scope: Deactivated successfully. Jan 27 05:44:03.685020 systemd[1]: cri-containerd-a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872.scope: Consumed 23.322s CPU time, 121.1M memory peak. Jan 27 05:44:03.688029 containerd[1684]: time="2026-01-27T05:44:03.687593715Z" level=info msg="received container exit event container_id:\"a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872\" id:\"a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872\" pid:3234 exit_status:1 exited_at:{seconds:1769492643 nanos:687340757}" Jan 27 05:44:03.688000 audit: BPF prog-id=146 op=UNLOAD Jan 27 05:44:03.688000 audit: BPF prog-id=150 op=UNLOAD Jan 27 05:44:03.718538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872-rootfs.mount: Deactivated successfully. 
Jan 27 05:44:03.725032 containerd[1684]: time="2026-01-27T05:44:03.724990488Z" level=info msg="StartContainer for \"8d5f2c2413e3b1725bc9394fabc0fb955bd7deaa4a516aa6c4ac184320f727c5\" returns successfully" Jan 27 05:44:04.575099 kubelet[2893]: I0127 05:44:04.575072 2893 scope.go:117] "RemoveContainer" containerID="a481931004521609d16fca706e0417c4cac967026e3e15180a57432e6c00f872" Jan 27 05:44:04.576523 containerd[1684]: time="2026-01-27T05:44:04.576493098Z" level=info msg="CreateContainer within sandbox \"399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 27 05:44:04.594949 containerd[1684]: time="2026-01-27T05:44:04.594399869Z" level=info msg="Container c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:44:04.597071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671953814.mount: Deactivated successfully. Jan 27 05:44:04.603286 containerd[1684]: time="2026-01-27T05:44:04.603255319Z" level=info msg="CreateContainer within sandbox \"399e5330febba48abf118707419b9a99f0dbd662f1542ca0fb03a0f15895e5f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9\"" Jan 27 05:44:04.603691 containerd[1684]: time="2026-01-27T05:44:04.603672155Z" level=info msg="StartContainer for \"c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9\"" Jan 27 05:44:04.604707 containerd[1684]: time="2026-01-27T05:44:04.604684990Z" level=info msg="connecting to shim c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9" address="unix:///run/containerd/s/16665a1858466eb826a91a7cbf7835abde715d8b4299b83d5b902434148520cd" protocol=ttrpc version=3 Jan 27 05:44:04.633311 systemd[1]: Started cri-containerd-c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9.scope - libcontainer container 
c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9. Jan 27 05:44:04.645000 audit: BPF prog-id=267 op=LOAD Jan 27 05:44:04.646000 audit: BPF prog-id=268 op=LOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=268 op=UNLOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=269 op=LOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=270 op=LOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=270 op=UNLOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=269 op=UNLOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.646000 audit: BPF prog-id=271 op=LOAD Jan 27 05:44:04.646000 audit[5534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3086 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:04.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653532353964373133663163366161353235333162656536643361 Jan 27 05:44:04.665749 containerd[1684]: time="2026-01-27T05:44:04.665717065Z" level=info msg="StartContainer for \"c4e5259d713f1c6aa52531bee6d3a715b4d362be1f97f61840b146285d4a8ea9\" returns successfully" Jan 27 05:44:05.975521 containerd[1684]: time="2026-01-27T05:44:05.975416085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:44:06.321323 containerd[1684]: time="2026-01-27T05:44:06.321270587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:44:06.323089 containerd[1684]: time="2026-01-27T05:44:06.323059633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:44:06.323227 containerd[1684]: time="2026-01-27T05:44:06.323130791Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:44:06.323331 kubelet[2893]: E0127 05:44:06.323296 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:06.323772 kubelet[2893]: E0127 05:44:06.323335 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:06.323834 containerd[1684]: time="2026-01-27T05:44:06.323754054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 05:44:06.323971 kubelet[2893]: E0127 05:44:06.323952 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-c8xsr_calico-apiserver(627719fb-0c0e-4f7d-a570-d19f7c72ca81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:44:06.324041 kubelet[2893]: E0127 05:44:06.323996 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-c8xsr" podUID="627719fb-0c0e-4f7d-a570-d19f7c72ca81" Jan 27 05:44:06.666288 containerd[1684]: time="2026-01-27T05:44:06.666122991Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:44:06.669351 containerd[1684]: time="2026-01-27T05:44:06.669307430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 05:44:06.669472 containerd[1684]: time="2026-01-27T05:44:06.669403480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 05:44:06.670176 kubelet[2893]: E0127 05:44:06.669701 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:44:06.670176 kubelet[2893]: E0127 05:44:06.669772 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 05:44:06.670176 kubelet[2893]: E0127 05:44:06.669894 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 05:44:06.671120 containerd[1684]: time="2026-01-27T05:44:06.671092016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 05:44:07.018540 containerd[1684]: time="2026-01-27T05:44:07.018462302Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 27 05:44:07.020629 containerd[1684]: time="2026-01-27T05:44:07.020542643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 05:44:07.020722 containerd[1684]: time="2026-01-27T05:44:07.020590756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 05:44:07.021153 kubelet[2893]: E0127 05:44:07.021068 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:44:07.021217 kubelet[2893]: E0127 05:44:07.021172 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 05:44:07.021489 kubelet[2893]: E0127 05:44:07.021463 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gt29m_calico-system(2fd25125-023e-4bbf-9ed8-e267fcf6bfb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 05:44:07.021593 kubelet[2893]: E0127 05:44:07.021552 2893 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gt29m" podUID="2fd25125-023e-4bbf-9ed8-e267fcf6bfb3" Jan 27 05:44:07.976087 kubelet[2893]: E0127 05:44:07.975983 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4t22j" podUID="e8248550-dadc-499c-aab6-b47350ead3d7" Jan 27 05:44:08.494151 systemd[1]: cri-containerd-8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad.scope: Deactivated successfully. Jan 27 05:44:08.494746 containerd[1684]: time="2026-01-27T05:44:08.494123265Z" level=info msg="received container exit event container_id:\"8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad\" id:\"8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad\" pid:2725 exit_status:1 exited_at:{seconds:1769492648 nanos:493764868}" Jan 27 05:44:08.494550 systemd[1]: cri-containerd-8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad.scope: Consumed 2.629s CPU time, 23.2M memory peak, 128K read from disk. 
Jan 27 05:44:08.494000 audit: BPF prog-id=272 op=LOAD Jan 27 05:44:08.496286 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 27 05:44:08.496353 kernel: audit: type=1334 audit(1769492648.494:910): prog-id=272 op=LOAD Jan 27 05:44:08.498000 audit: BPF prog-id=88 op=UNLOAD Jan 27 05:44:08.501000 audit: BPF prog-id=103 op=UNLOAD Jan 27 05:44:08.503778 kernel: audit: type=1334 audit(1769492648.498:911): prog-id=88 op=UNLOAD Jan 27 05:44:08.503841 kernel: audit: type=1334 audit(1769492648.501:912): prog-id=103 op=UNLOAD Jan 27 05:44:08.501000 audit: BPF prog-id=107 op=UNLOAD Jan 27 05:44:08.505854 kernel: audit: type=1334 audit(1769492648.501:913): prog-id=107 op=UNLOAD Jan 27 05:44:08.529545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad-rootfs.mount: Deactivated successfully. Jan 27 05:44:08.590940 kubelet[2893]: I0127 05:44:08.590898 2893 scope.go:117] "RemoveContainer" containerID="8c2fff006b697f2ad7c416f96ffb0a734bcef3f3ba2f7f2fb4499611da1572ad" Jan 27 05:44:08.594382 containerd[1684]: time="2026-01-27T05:44:08.594346050Z" level=info msg="CreateContainer within sandbox \"d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 27 05:44:08.617653 containerd[1684]: time="2026-01-27T05:44:08.617351099Z" level=info msg="Container d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f: CDI devices from CRI Config.CDIDevices: []" Jan 27 05:44:08.627131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545085742.mount: Deactivated successfully. 
Jan 27 05:44:08.634072 containerd[1684]: time="2026-01-27T05:44:08.633972320Z" level=info msg="CreateContainer within sandbox \"d93e21f4eeb0618810e7972e3761d4187bc09a674fdc61aa7e5d9f5902c52845\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f\"" Jan 27 05:44:08.635072 containerd[1684]: time="2026-01-27T05:44:08.634861927Z" level=info msg="StartContainer for \"d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f\"" Jan 27 05:44:08.635903 containerd[1684]: time="2026-01-27T05:44:08.635875443Z" level=info msg="connecting to shim d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f" address="unix:///run/containerd/s/79e2160fa7294fc513102a87e507973b708b698f4b63f66045cac859fbeb4e3c" protocol=ttrpc version=3 Jan 27 05:44:08.657244 systemd[1]: Started cri-containerd-d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f.scope - libcontainer container d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f. 
Jan 27 05:44:08.669000 audit: BPF prog-id=273 op=LOAD Jan 27 05:44:08.673067 kernel: audit: type=1334 audit(1769492648.669:914): prog-id=273 op=LOAD Jan 27 05:44:08.671000 audit: BPF prog-id=274 op=LOAD Jan 27 05:44:08.675110 kernel: audit: type=1334 audit(1769492648.671:915): prog-id=274 op=LOAD Jan 27 05:44:08.671000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.682633 kernel: audit: type=1300 audit(1769492648.671:915): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.682700 kernel: audit: type=1327 audit(1769492648.671:915): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=274 op=UNLOAD Jan 27 05:44:08.694807 kernel: audit: type=1334 audit(1769492648.672:916): prog-id=274 op=UNLOAD Jan 27 05:44:08.694861 kernel: audit: type=1300 audit(1769492648.672:916): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=275 op=LOAD Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=276 op=LOAD Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=276 op=UNLOAD Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=275 op=UNLOAD Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 05:44:08.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.672000 audit: BPF prog-id=277 op=LOAD Jan 27 05:44:08.672000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2593 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
05:44:08.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435386633666134356132643562666133613062393039623261376434 Jan 27 05:44:08.725547 containerd[1684]: time="2026-01-27T05:44:08.725437544Z" level=info msg="StartContainer for \"d58f3fa45a2d5bfa3a0b909b2a7d4cfde596f02c1e8fe77b895820a245cd5e7f\" returns successfully" Jan 27 05:44:09.975748 kubelet[2893]: E0127 05:44:09.975681 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65f4f5cc45-xxl57" podUID="a4123b97-2d89-42ef-9011-27c5f71176fd" Jan 27 05:44:10.975708 containerd[1684]: time="2026-01-27T05:44:10.975631739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:44:11.298573 containerd[1684]: time="2026-01-27T05:44:11.298300998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:44:11.300701 containerd[1684]: time="2026-01-27T05:44:11.300534155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:44:11.300701 containerd[1684]: time="2026-01-27T05:44:11.300560999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:44:11.300902 
kubelet[2893]: E0127 05:44:11.300774 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:11.300902 kubelet[2893]: E0127 05:44:11.300810 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:11.300902 kubelet[2893]: E0127 05:44:11.300875 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5b4f587895-dm4v4_calico-apiserver(5de8dbc5-cf50-4a41-99c0-153b9e80ac79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:44:11.301582 kubelet[2893]: E0127 05:44:11.300904 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b4f587895-dm4v4" podUID="5de8dbc5-cf50-4a41-99c0-153b9e80ac79" Jan 27 05:44:12.975233 containerd[1684]: time="2026-01-27T05:44:12.975190923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 05:44:13.312584 containerd[1684]: time="2026-01-27T05:44:13.312347886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 05:44:13.314791 
containerd[1684]: time="2026-01-27T05:44:13.314681292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 05:44:13.314893 containerd[1684]: time="2026-01-27T05:44:13.314784822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 05:44:13.315244 kubelet[2893]: E0127 05:44:13.315156 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:13.315244 kubelet[2893]: E0127 05:44:13.315222 2893 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 05:44:13.315676 kubelet[2893]: E0127 05:44:13.315331 2893 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64987cbdf8-thlkg_calico-apiserver(37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 05:44:13.315676 kubelet[2893]: E0127 05:44:13.315373 2893 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64987cbdf8-thlkg" podUID="37c3a4c8-acf6-4f56-beff-6dcab7eb2ee8" Jan 27 05:44:13.359507 kubelet[2893]: E0127 05:44:13.359293 2893 controller.go:195] "Failed to update lease" err="Put \"https://10.0.2.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4592-0-0-n-5ca0d578df?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"