Jan 19 11:59:43.330343 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 09:38:41 -00 2026 Jan 19 11:59:43.330366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 11:59:43.330374 kernel: BIOS-provided physical RAM map: Jan 19 11:59:43.330383 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 19 11:59:43.330389 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 19 11:59:43.330395 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 19 11:59:43.330402 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 19 11:59:43.330408 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 19 11:59:43.330414 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 19 11:59:43.330420 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 19 11:59:43.330426 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 19 11:59:43.330434 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 19 11:59:43.330440 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 19 11:59:43.330446 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 19 11:59:43.330453 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 19 11:59:43.330460 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 19 11:59:43.330468 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 19 11:59:43.330475 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 19 11:59:43.330481 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 19 11:59:43.330487 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 19 11:59:43.330494 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 19 11:59:43.330500 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 19 11:59:43.330506 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 19 11:59:43.330513 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 19 11:59:43.330519 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 19 11:59:43.330525 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 19 11:59:43.330534 kernel: NX (Execute Disable) protection: active Jan 19 11:59:43.330540 kernel: APIC: Static calls initialized Jan 19 11:59:43.330546 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 19 11:59:43.330553 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 19 11:59:43.330559 kernel: extended physical RAM map: Jan 19 11:59:43.330566 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 19 11:59:43.330572 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 19 11:59:43.330578 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 19 11:59:43.330585 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 19 11:59:43.330591 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 19 11:59:43.330598 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 19 11:59:43.330606 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 19 11:59:43.330612 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 19 11:59:43.330619 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 19 11:59:43.330628 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 19 11:59:43.330637 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 19 11:59:43.330644 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 19 11:59:43.330651 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 19 11:59:43.330657 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 19 11:59:43.330664 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 19 11:59:43.330671 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 19 11:59:43.330678 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 19 11:59:43.330684 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 19 11:59:43.330691 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 19 11:59:43.330700 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 19 11:59:43.330707 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 19 11:59:43.330713 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 19 11:59:43.330720 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 19 11:59:43.330727 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 19 11:59:43.330733 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 19 11:59:43.330740 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 19 11:59:43.330747 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 19 11:59:43.330754 kernel: efi: EFI v2.7 by EDK II Jan 19 11:59:43.330761 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 19 11:59:43.330767 kernel: random: crng init done Jan 19 11:59:43.330776 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 19 11:59:43.330783 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 19 11:59:43.330790 kernel: secureboot: Secure boot disabled Jan 19 11:59:43.330796 kernel: SMBIOS 2.8 present. 
Jan 19 11:59:43.330803 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 19 11:59:43.330810 kernel: DMI: Memory slots populated: 1/1 Jan 19 11:59:43.330816 kernel: Hypervisor detected: KVM Jan 19 11:59:43.330823 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 19 11:59:43.330830 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 19 11:59:43.330836 kernel: kvm-clock: using sched offset of 11888754033 cycles Jan 19 11:59:43.330843 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 19 11:59:43.330853 kernel: tsc: Detected 2445.426 MHz processor Jan 19 11:59:43.330860 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 19 11:59:43.330867 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 19 11:59:43.330874 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 19 11:59:43.330881 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 19 11:59:43.330888 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 19 11:59:43.330895 kernel: Using GB pages for direct mapping Jan 19 11:59:43.330904 kernel: ACPI: Early table checksum verification disabled Jan 19 11:59:43.330911 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 19 11:59:43.330918 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 19 11:59:43.330925 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.330932 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.330939 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 19 11:59:43.330946 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.330953 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.331292 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.331300 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 11:59:43.331307 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 19 11:59:43.331314 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 19 11:59:43.331321 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 19 11:59:43.331328 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 19 11:59:43.331335 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 19 11:59:43.331344 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 19 11:59:43.331351 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 19 11:59:43.331358 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 19 11:59:43.331365 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 19 11:59:43.331372 kernel: No NUMA configuration found Jan 19 11:59:43.331379 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 19 11:59:43.331386 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 19 11:59:43.331395 kernel: Zone ranges: Jan 19 11:59:43.331402 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 19 11:59:43.331409 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 19 11:59:43.331416 kernel: Normal empty Jan 19 11:59:43.331423 kernel: Device empty Jan 19 
11:59:43.331430 kernel: Movable zone start for each node Jan 19 11:59:43.331436 kernel: Early memory node ranges Jan 19 11:59:43.331443 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 19 11:59:43.331452 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 19 11:59:43.331459 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 19 11:59:43.331466 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 19 11:59:43.331473 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 19 11:59:43.331480 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 19 11:59:43.331486 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 19 11:59:43.331493 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 19 11:59:43.331500 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 19 11:59:43.331509 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 19 11:59:43.331523 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 19 11:59:43.331643 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 19 11:59:43.331650 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 19 11:59:43.331657 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 19 11:59:43.331664 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 19 11:59:43.331672 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 19 11:59:43.331679 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 19 11:59:43.331686 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 19 11:59:43.331696 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 19 11:59:43.331703 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 19 11:59:43.331711 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 19 11:59:43.331718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 19 11:59:43.331727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 19 11:59:43.331735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 19 11:59:43.331742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 19 11:59:43.331749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 19 11:59:43.331756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 19 11:59:43.331763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 19 11:59:43.331770 kernel: TSC deadline timer available Jan 19 11:59:43.331777 kernel: CPU topo: Max. logical packages: 1 Jan 19 11:59:43.331787 kernel: CPU topo: Max. logical dies: 1 Jan 19 11:59:43.331794 kernel: CPU topo: Max. dies per package: 1 Jan 19 11:59:43.331801 kernel: CPU topo: Max. threads per core: 1 Jan 19 11:59:43.331808 kernel: CPU topo: Num. cores per package: 4 Jan 19 11:59:43.331815 kernel: CPU topo: Num. 
threads per package: 4 Jan 19 11:59:43.331822 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 19 11:59:43.331829 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 19 11:59:43.331837 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 19 11:59:43.331846 kernel: kvm-guest: setup PV sched yield Jan 19 11:59:43.331853 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 19 11:59:43.331860 kernel: Booting paravirtualized kernel on KVM Jan 19 11:59:43.331868 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 19 11:59:43.331875 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 19 11:59:43.331883 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 19 11:59:43.331890 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 19 11:59:43.331899 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 19 11:59:43.331907 kernel: kvm-guest: PV spinlocks enabled Jan 19 11:59:43.331914 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 19 11:59:43.331922 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 11:59:43.331930 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 19 11:59:43.331937 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 19 11:59:43.331946 kernel: Fallback order for Node 0: 0 Jan 19 11:59:43.332271 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 19 11:59:43.332281 kernel: Policy zone: DMA32 Jan 19 11:59:43.332289 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 19 11:59:43.332296 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 19 11:59:43.332303 kernel: ftrace: allocating 40128 entries in 157 pages Jan 19 11:59:43.332310 kernel: ftrace: allocated 157 pages with 5 groups Jan 19 11:59:43.332318 kernel: Dynamic Preempt: voluntary Jan 19 11:59:43.332328 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 19 11:59:43.332336 kernel: rcu: RCU event tracing is enabled. Jan 19 11:59:43.332343 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 19 11:59:43.332350 kernel: Trampoline variant of Tasks RCU enabled. Jan 19 11:59:43.332358 kernel: Rude variant of Tasks RCU enabled. Jan 19 11:59:43.332365 kernel: Tracing variant of Tasks RCU enabled. Jan 19 11:59:43.332372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 19 11:59:43.332379 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 19 11:59:43.332388 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 11:59:43.332396 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 11:59:43.332403 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 19 11:59:43.332410 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 19 11:59:43.332417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 19 11:59:43.332425 kernel: Console: colour dummy device 80x25 Jan 19 11:59:43.332432 kernel: printk: legacy console [ttyS0] enabled Jan 19 11:59:43.332441 kernel: ACPI: Core revision 20240827 Jan 19 11:59:43.332448 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 19 11:59:43.332455 kernel: APIC: Switch to symmetric I/O mode setup Jan 19 11:59:43.332463 kernel: x2apic enabled Jan 19 11:59:43.332470 kernel: APIC: Switched APIC routing to: physical x2apic Jan 19 11:59:43.332477 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 19 11:59:43.332484 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 19 11:59:43.332494 kernel: kvm-guest: setup PV IPIs Jan 19 11:59:43.332501 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 19 11:59:43.332508 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 19 11:59:43.332515 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 19 11:59:43.332523 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 19 11:59:43.332530 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 19 11:59:43.332537 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 19 11:59:43.332546 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 19 11:59:43.332553 kernel: Spectre V2 : Mitigation: Retpolines Jan 19 11:59:43.332561 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 19 11:59:43.332568 kernel: Speculative Store Bypass: Vulnerable Jan 19 11:59:43.332575 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 19 11:59:43.332583 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 19 11:59:43.332590 kernel: active return thunk: srso_alias_return_thunk Jan 19 11:59:43.332599 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 19 11:59:43.332607 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 19 11:59:43.332614 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 19 11:59:43.332621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 19 11:59:43.332628 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 19 11:59:43.332636 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 19 11:59:43.332643 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 19 11:59:43.332652 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 19 11:59:43.332659 kernel: Freeing SMP alternatives memory: 32K Jan 19 11:59:43.332666 kernel: pid_max: default: 32768 minimum: 301 Jan 19 11:59:43.332674 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 19 11:59:43.332681 kernel: landlock: Up and running. Jan 19 11:59:43.332688 kernel: SELinux: Initializing. 
Jan 19 11:59:43.332695 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 19 11:59:43.332704 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 19 11:59:43.332711 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 19 11:59:43.332719 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 19 11:59:43.332726 kernel: signal: max sigframe size: 1776 Jan 19 11:59:43.332733 kernel: rcu: Hierarchical SRCU implementation. Jan 19 11:59:43.332740 kernel: rcu: Max phase no-delay instances is 400. Jan 19 11:59:43.332747 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 19 11:59:43.332757 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 19 11:59:43.332764 kernel: smp: Bringing up secondary CPUs ... Jan 19 11:59:43.332771 kernel: smpboot: x86: Booting SMP configuration: Jan 19 11:59:43.332778 kernel: .... node #0, CPUs: #1 #2 #3 Jan 19 11:59:43.332785 kernel: smp: Brought up 1 node, 4 CPUs Jan 19 11:59:43.332792 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 19 11:59:43.332800 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved) Jan 19 11:59:43.332809 kernel: devtmpfs: initialized Jan 19 11:59:43.332816 kernel: x86/mm: Memory block size: 128MB Jan 19 11:59:43.332823 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 19 11:59:43.332831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 19 11:59:43.332838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 19 11:59:43.332845 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 19 11:59:43.332852 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 19 11:59:43.332862 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 19 11:59:43.332869 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 19 11:59:43.332876 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 19 11:59:43.332883 kernel: pinctrl core: initialized pinctrl subsystem Jan 19 11:59:43.332891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 19 11:59:43.332898 kernel: audit: initializing netlink subsys (disabled) Jan 19 11:59:43.332905 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 19 11:59:43.332915 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 19 11:59:43.332922 kernel: audit: type=2000 audit(1768823967.366:1): state=initialized audit_enabled=0 res=1 Jan 19 11:59:43.332929 kernel: cpuidle: using governor menu Jan 19 11:59:43.332936 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 19 11:59:43.332944 kernel: dca service started, version 1.12.1 Jan 19 11:59:43.332951 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 19 11:59:43.333275 kernel: PCI: Using configuration type 1 for base access Jan 19 11:59:43.333285 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 19 11:59:43.333293 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 19 11:59:43.333300 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 19 11:59:43.333308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 19 11:59:43.333315 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 19 11:59:43.333322 kernel: ACPI: Added _OSI(Module Device) Jan 19 11:59:43.333329 kernel: ACPI: Added _OSI(Processor Device) Jan 19 11:59:43.333339 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 19 11:59:43.333346 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 19 11:59:43.333353 kernel: ACPI: Interpreter enabled Jan 19 11:59:43.333360 kernel: ACPI: PM: (supports S0 S3 S5) Jan 19 11:59:43.333367 kernel: ACPI: Using IOAPIC for interrupt routing Jan 19 11:59:43.333375 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 19 11:59:43.333382 kernel: PCI: Using E820 reservations for host bridge windows Jan 19 11:59:43.333391 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 19 11:59:43.333398 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 19 11:59:43.333633 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 19 11:59:43.333820 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 19 11:59:43.334339 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 19 11:59:43.334352 kernel: PCI host bridge to bus 0000:00 Jan 19 11:59:43.334531 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 19 11:59:43.334688 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 19 11:59:43.334843 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 19 11:59:43.335338 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 19 11:59:43.335498 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 19 11:59:43.335652 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 19 11:59:43.335811 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 19 11:59:43.336342 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 19 11:59:43.336526 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 19 11:59:43.336703 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 19 11:59:43.336872 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 19 11:59:43.337446 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 19 11:59:43.337618 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 19 11:59:43.337786 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 20507 usecs Jan 19 11:59:43.338369 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 19 11:59:43.338594 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 19 11:59:43.338772 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 19 11:59:43.338940 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 19 11:59:43.341398 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 19 11:59:43.341575 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 19 11:59:43.341748 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xc1042000-0xc1042fff] Jan 19 11:59:43.341916 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jan 19 11:59:43.342531 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 19 11:59:43.342701 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 19 11:59:43.342869 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 19 11:59:43.343472 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 19 11:59:43.343645 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 19 11:59:43.343820 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 19 11:59:43.344333 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 19 11:59:43.344507 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 22460 usecs Jan 19 11:59:43.344682 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 19 11:59:43.344849 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 19 11:59:43.345363 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 19 11:59:43.345545 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 19 11:59:43.345718 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 19 11:59:43.345729 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 19 11:59:43.345737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 19 11:59:43.345744 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 19 11:59:43.345752 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 19 11:59:43.345759 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 19 11:59:43.345769 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 19 11:59:43.345777 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 19 11:59:43.345784 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 19 11:59:43.345792 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 19 11:59:43.345799 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 19 11:59:43.345807 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 19 11:59:43.345814 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 19 11:59:43.345824 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 19 11:59:43.345831 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 19 11:59:43.345839 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 19 11:59:43.345846 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 19 11:59:43.345853 kernel: iommu: Default domain type: Translated Jan 19 11:59:43.345860 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 19 11:59:43.345868 kernel: efivars: Registered efivars operations Jan 19 11:59:43.345877 kernel: PCI: Using ACPI for IRQ routing Jan 19 11:59:43.345884 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 19 11:59:43.345892 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 19 11:59:43.345899 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 19 11:59:43.345906 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 19 11:59:43.345914 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 19 11:59:43.345921 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 19 11:59:43.345930 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 19 
11:59:43.345937 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 19 11:59:43.345944 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 19 11:59:43.349411 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 19 11:59:43.349581 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 19 11:59:43.349748 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 19 11:59:43.349759 kernel: vgaarb: loaded Jan 19 11:59:43.349772 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 19 11:59:43.349779 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 19 11:59:43.349787 kernel: clocksource: Switched to clocksource kvm-clock Jan 19 11:59:43.349795 kernel: VFS: Disk quotas dquot_6.6.0 Jan 19 11:59:43.349802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 19 11:59:43.349810 kernel: pnp: PnP ACPI init Jan 19 11:59:43.350332 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 19 11:59:43.350350 kernel: pnp: PnP ACPI: found 6 devices Jan 19 11:59:43.350358 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 19 11:59:43.350366 kernel: NET: Registered PF_INET protocol family Jan 19 11:59:43.350373 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 19 11:59:43.350381 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 19 11:59:43.350403 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 19 11:59:43.350415 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 19 11:59:43.350423 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 19 11:59:43.350431 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 19 11:59:43.350438 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 19 11:59:43.350446 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 19 11:59:43.350454 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 19 11:59:43.350462 kernel: NET: Registered PF_XDP protocol family Jan 19 11:59:43.350635 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 19 11:59:43.350804 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 19 11:59:43.351304 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 19 11:59:43.351471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 19 11:59:43.351626 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 19 11:59:43.351781 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 19 11:59:43.351943 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 19 11:59:43.352446 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 19 11:59:43.352459 kernel: PCI: CLS 0 bytes, default 64 Jan 19 11:59:43.352467 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 19 11:59:43.352475 kernel: Initialise system trusted keyrings Jan 19 11:59:43.352483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 19 11:59:43.352490 kernel: Key type asymmetric registered Jan 19 11:59:43.352502 kernel: Asymmetric key parser 'x509' registered Jan 19 11:59:43.352509 kernel: Block layer SCSI generic (bsg) driver version 
0.4 loaded (major 250) Jan 19 11:59:43.352517 kernel: io scheduler mq-deadline registered Jan 19 11:59:43.352525 kernel: io scheduler kyber registered Jan 19 11:59:43.352532 kernel: io scheduler bfq registered Jan 19 11:59:43.352540 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 19 11:59:43.352548 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 19 11:59:43.352558 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 19 11:59:43.352566 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 19 11:59:43.352574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 19 11:59:43.352582 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 19 11:59:43.352589 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 19 11:59:43.352599 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 19 11:59:43.352607 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 19 11:59:43.352779 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 19 11:59:43.352790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 19 11:59:43.352950 kernel: rtc_cmos 00:04: registered as rtc0 Jan 19 11:59:43.353482 kernel: rtc_cmos 00:04: setting system clock to 2026-01-19T11:59:37 UTC (1768823977) Jan 19 11:59:43.353648 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 19 11:59:43.353659 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 19 11:59:43.353667 kernel: efifb: probing for efifb Jan 19 11:59:43.353675 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 19 11:59:43.353683 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 19 11:59:43.353690 kernel: efifb: scrolling: redraw Jan 19 11:59:43.353698 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 19 11:59:43.353709 kernel: Console: switching to colour frame buffer device 160x50 Jan 19 11:59:43.353716 kernel: fb0: EFI VGA frame buffer device Jan 19 11:59:43.353724 kernel: pstore: Using crash dump compression: deflate Jan 19 11:59:43.353732 kernel: pstore: Registered efi_pstore as persistent store backend Jan 19 11:59:43.353739 kernel: NET: Registered PF_INET6 protocol family Jan 19 11:59:43.353747 kernel: Segment Routing with IPv6 Jan 19 11:59:43.353755 kernel: In-situ OAM (IOAM) with IPv6 Jan 19 11:59:43.353762 kernel: NET: Registered PF_PACKET protocol family Jan 19 11:59:43.353771 kernel: Key type dns_resolver registered Jan 19 11:59:43.353779 kernel: IPI shorthand broadcast: enabled Jan 19 11:59:43.353787 kernel: sched_clock: Marking stable (7057115183, 2542405830)->(10990332926, -1390811913) Jan 19 11:59:43.353795 kernel: registered taskstats version 1 Jan 19 11:59:43.353802 kernel: Loading compiled-in X.509 certificates Jan 19 11:59:43.353810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ba909111c102256a4abe14f4fc03cb5c21d9fa72' Jan 19 11:59:43.353818 kernel: Demotion targets for Node 0: null Jan 19 11:59:43.353827 kernel: Key type .fscrypt registered Jan 19 11:59:43.353835 kernel: Key type fscrypt-provisioning registered Jan 19 11:59:43.353842 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 19 11:59:43.353850 kernel: ima: Allocated hash algorithm: sha1 Jan 19 11:59:43.353858 kernel: ima: No architecture policies found Jan 19 11:59:43.353865 kernel: clk: Disabling unused clocks Jan 19 11:59:43.353873 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 19 11:59:43.353882 kernel: Write protecting the kernel read-only data: 47104k Jan 19 11:59:43.353890 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 19 11:59:43.353897 kernel: Run /init as init process Jan 19 11:59:43.353905 kernel: with arguments: Jan 19 11:59:43.353913 kernel: /init Jan 19 11:59:43.353920 kernel: with environment: Jan 19 11:59:43.353928 kernel: HOME=/ Jan 19 11:59:43.353937 kernel: TERM=linux Jan 19 11:59:43.353945 kernel: SCSI subsystem initialized Jan 19 11:59:43.353952 kernel: libata version 3.00 loaded. Jan 19 11:59:43.354471 kernel: ahci 0000:00:1f.2: version 3.0 Jan 19 11:59:43.354482 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 19 11:59:43.354647 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 19 11:59:43.354815 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 19 11:59:43.355329 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 19 11:59:43.355526 kernel: scsi host0: ahci Jan 19 11:59:43.355706 kernel: scsi host1: ahci Jan 19 11:59:43.355884 kernel: scsi host2: ahci Jan 19 11:59:43.356496 kernel: scsi host3: ahci Jan 19 11:59:43.356682 kernel: scsi host4: ahci Jan 19 11:59:43.356862 kernel: scsi host5: ahci Jan 19 11:59:43.356873 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 19 11:59:43.356882 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 19 11:59:43.356890 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 19 11:59:43.356898 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 19 11:59:43.356908 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 19 11:59:43.356916 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 19 11:59:43.356924 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 19 11:59:43.356932 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 19 11:59:43.356939 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 19 11:59:43.356947 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 19 11:59:43.357284 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 19 11:59:43.357297 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 19 11:59:43.357305 kernel: ata3.00: LPM support broken, forcing max_power Jan 19 11:59:43.357313 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 19 11:59:43.357321 kernel: ata3.00: applying bridge limits Jan 19 11:59:43.357328 kernel: ata3.00: LPM support broken, forcing max_power Jan 19 11:59:43.357336 kernel: ata3.00: configured for UDMA/100 Jan 19 11:59:43.357542 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 19 11:59:43.357729 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 19 11:59:43.357897 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 19 11:59:43.361436 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 19 11:59:43.361452 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 19 11:59:43.361461 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 19 11:59:43.361469 kernel: GPT:16515071 != 27000831 Jan 19 11:59:43.361481 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 19 11:59:43.361488 kernel: GPT:16515071 != 27000831 Jan 19 11:59:43.361496 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 19 11:59:43.361503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 19 11:59:43.361696 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 19 11:59:43.361711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 19 11:59:43.361719 kernel: device-mapper: uevent: version 1.0.3 Jan 19 11:59:43.361729 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 19 11:59:43.361737 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 19 11:59:43.361745 kernel: raid6: avx2x4 gen() 21540 MB/s Jan 19 11:59:43.361753 kernel: raid6: avx2x2 gen() 22877 MB/s Jan 19 11:59:43.361760 kernel: raid6: avx2x1 gen() 17093 MB/s Jan 19 11:59:43.361768 kernel: raid6: using algorithm avx2x2 gen() 22877 MB/s Jan 19 11:59:43.361775 kernel: raid6: .... xor() 18565 MB/s, rmw enabled Jan 19 11:59:43.361787 kernel: raid6: using avx2x2 recovery algorithm Jan 19 11:59:43.361795 kernel: xor: automatically using best checksumming function avx Jan 19 11:59:43.361803 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 19 11:59:43.361810 kernel: BTRFS: device fsid 163044fe-e6e3-4007-9021-e65918f0e7ac devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 19 11:59:43.361818 kernel: BTRFS info (device dm-0): first mount of filesystem 163044fe-e6e3-4007-9021-e65918f0e7ac Jan 19 11:59:43.361826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 19 11:59:43.362494 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 19 11:59:43.362509 kernel: BTRFS info (device dm-0): enabling free space tree Jan 19 11:59:43.362517 kernel: loop: module loaded Jan 19 11:59:43.362525 kernel: loop0: detected capacity change from 0 to 100552 Jan 19 11:59:43.362533 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 19 11:59:43.362542 systemd[1]: Successfully made /usr/ read-only. Jan 19 11:59:43.362553 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 11:59:43.362564 systemd[1]: Detected virtualization kvm. Jan 19 11:59:43.362572 systemd[1]: Detected architecture x86-64. Jan 19 11:59:43.362580 systemd[1]: Running in initrd. Jan 19 11:59:43.362588 systemd[1]: No hostname configured, using default hostname. Jan 19 11:59:43.362708 systemd[1]: Hostname set to . Jan 19 11:59:43.362716 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 11:59:43.362724 systemd[1]: Queued start job for default target initrd.target. Jan 19 11:59:43.362735 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 11:59:43.362744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 11:59:43.362752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 19 11:59:43.362761 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 19 11:59:43.362769 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 11:59:43.362778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 19 11:59:43.362789 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 19 11:59:43.362797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 11:59:43.362805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 11:59:43.362814 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 19 11:59:43.362822 systemd[1]: Reached target paths.target - Path Units. Jan 19 11:59:43.362830 systemd[1]: Reached target slices.target - Slice Units. Jan 19 11:59:43.362840 systemd[1]: Reached target swap.target - Swaps. Jan 19 11:59:43.362849 systemd[1]: Reached target timers.target - Timer Units. Jan 19 11:59:43.362857 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 11:59:43.362865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 11:59:43.362873 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 11:59:43.362881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 19 11:59:43.362890 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 19 11:59:43.362900 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 11:59:43.362908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 11:59:43.362920 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 11:59:43.362929 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 11:59:43.362937 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 19 11:59:43.362946 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 19 11:59:43.363364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 11:59:43.363378 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 19 11:59:43.363387 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 19 11:59:43.363396 systemd[1]: Starting systemd-fsck-usr.service... Jan 19 11:59:43.363404 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 11:59:43.363413 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 11:59:43.363424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 11:59:43.363432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 19 11:59:43.363440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 11:59:43.363448 systemd[1]: Finished systemd-fsck-usr.service. Jan 19 11:59:43.363481 systemd-journald[320]: Collecting audit messages is enabled. 
Jan 19 11:59:43.363504 kernel: audit: type=1130 audit(1768823983.328:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.363513 systemd-journald[320]: Journal started Jan 19 11:59:43.363532 systemd-journald[320]: Runtime Journal (/run/log/journal/c7cf1b80cc4246da8dba05b9d0a95232) is 6M, max 48M, 42M free. Jan 19 11:59:43.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.387451 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 19 11:59:43.418752 systemd[1]: Started systemd-journald.service - Journal Service. Jan 19 11:59:43.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.444525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 11:59:43.506911 kernel: audit: type=1130 audit(1768823983.438:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.578286 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 19 11:59:43.584713 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 19 11:59:43.591922 kernel: Bridge firewalling registered Jan 19 11:59:43.616812 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 19 11:59:43.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.632786 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 11:59:43.763565 kernel: audit: type=1130 audit(1768823983.654:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.763595 kernel: audit: type=1130 audit(1768823983.694:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.671544 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 11:59:43.764325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 11:59:43.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 11:59:43.863656 kernel: audit: type=1130 audit(1768823983.810:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.845352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 11:59:43.933847 kernel: audit: type=1130 audit(1768823983.863:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:43.867868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 19 11:59:43.976375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 11:59:43.979568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 11:59:44.040441 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 11:59:44.119487 kernel: audit: type=1130 audit(1768823984.058:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.120715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 11:59:44.184438 kernel: audit: type=1130 audit(1768823984.129:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.134373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 19 11:59:44.250357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 11:59:44.337386 kernel: audit: type=1130 audit(1768823984.264:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.337425 kernel: audit: type=1334 audit(1768823984.268:11): prog-id=6 op=LOAD Jan 19 11:59:44.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.268000 audit: BPF prog-id=6 op=LOAD Jan 19 11:59:44.337516 dracut-cmdline[354]: dracut-109 Jan 19 11:59:44.271546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 19 11:59:44.381821 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 11:59:44.564673 systemd-resolved[360]: Positive Trust Anchors: Jan 19 11:59:44.564812 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 11:59:44.564817 systemd-resolved[360]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 11:59:44.564844 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 11:59:44.612543 systemd-resolved[360]: Defaulting to hostname 'linux'. Jan 19 11:59:44.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:44.617586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 19 11:59:44.722723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 11:59:45.100357 kernel: Loading iSCSI transport class v2.0-870. Jan 19 11:59:45.147541 kernel: iscsi: registered transport (tcp) Jan 19 11:59:45.209535 kernel: iscsi: registered transport (qla4xxx) Jan 19 11:59:45.209690 kernel: QLogic iSCSI HBA Driver Jan 19 11:59:45.307740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 11:59:45.406854 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 11:59:45.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:45.452715 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 19 11:59:45.598390 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 19 11:59:45.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:45.618873 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 19 11:59:45.644714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 19 11:59:45.797495 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 19 11:59:45.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 11:59:45.821000 audit: BPF prog-id=7 op=LOAD Jan 19 11:59:45.821000 audit: BPF prog-id=8 op=LOAD Jan 19 11:59:45.823711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 11:59:45.936649 systemd-udevd[584]: Using default interface naming scheme 'v257'. Jan 19 11:59:45.970434 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 11:59:45.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.008767 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 19 11:59:46.138705 dracut-pre-trigger[620]: rd.md=0: removing MD RAID activation Jan 19 11:59:46.269972 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 11:59:46.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.310408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 11:59:46.362720 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 11:59:46.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.363000 audit: BPF prog-id=9 op=LOAD Jan 19 11:59:46.368422 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 11:59:46.510346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 11:59:46.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.548584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 19 11:59:46.600876 systemd-networkd[725]: lo: Link UP Jan 19 11:59:46.600886 systemd-networkd[725]: lo: Gained carrier Jan 19 11:59:46.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.603642 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 11:59:46.628620 systemd[1]: Reached target network.target - Network. Jan 19 11:59:46.721835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 19 11:59:46.767789 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 19 11:59:46.817329 kernel: cryptd: max_cpu_qlen set to 1000 Jan 19 11:59:46.825891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 19 11:59:46.870486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 19 11:59:46.916893 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 19 11:59:46.940823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 11:59:46.943285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
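The dracut-cmdline entry above echoes the full kernel command line that the initrd units act on. As a minimal sketch using only the Python standard library (the sample string is an abbreviated copy of the cmdline reported in this log), such a line can be split into key=value parameters and bare flags like this:

    import shlex

    # Abbreviated copy of the command line reported by dracut-cmdline above.
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
               "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
               "flatcar.first_boot=detected")

    params, flags = {}, []
    for token in shlex.split(cmdline):      # shlex honors quoted values if present
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            flags.append(token)

    print(params["root"])       # LABEL=ROOT
    print(params["console"])    # ttyS0,115200
    print(flags)                # [] -- every token in this excerpt carries a value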
Jan 19 11:59:46.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:46.992764 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 11:59:47.018404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 11:59:47.056813 disk-uuid[768]: Primary Header is updated. Jan 19 11:59:47.056813 disk-uuid[768]: Secondary Entries is updated. Jan 19 11:59:47.056813 disk-uuid[768]: Secondary Header is updated. Jan 19 11:59:47.147761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 11:59:47.162712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 11:59:47.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:47.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:47.227361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 11:59:47.277738 kernel: AES CTR mode by8 optimization enabled Jan 19 11:59:47.277760 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 19 11:59:47.348506 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 11:59:47.348627 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 11:59:47.352373 systemd-networkd[725]: eth0: Link UP Jan 19 11:59:47.352701 systemd-networkd[725]: eth0: Gained carrier Jan 19 11:59:47.352710 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 11:59:47.465556 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 19 11:59:47.466443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 11:59:47.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:47.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:47.556714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 19 11:59:47.572886 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 11:59:47.603780 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 11:59:47.636351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 11:59:47.640609 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 19 11:59:47.763821 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
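systemd-networkd reports a DHCPv4 lease of 10.0.0.26/16 with gateway 10.0.0.1 on eth0. A small illustrative sketch with the standard-library ipaddress module shows what that prefix implies; only values already visible in the log are used.

    import ipaddress

    # Lease as reported by systemd-networkd: address/prefix and gateway.
    lease = ipaddress.ip_interface("10.0.0.26/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    net = lease.network
    print(net)                      # 10.0.0.0/16
    print(net.netmask)              # 255.255.0.0
    print(net.broadcast_address)    # 10.0.255.255
    print(gateway in net)           # True -- the gateway is on-link
    print(net.num_addresses)        # 65536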
Jan 19 11:59:47.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:48.197757 disk-uuid[771]: Warning: The kernel is still using the old partition table. Jan 19 11:59:48.197757 disk-uuid[771]: The new table will be used at the next reboot or after you Jan 19 11:59:48.197757 disk-uuid[771]: run partprobe(8) or kpartx(8) Jan 19 11:59:48.197757 disk-uuid[771]: The operation has completed successfully. Jan 19 11:59:48.288285 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 19 11:59:48.288743 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 19 11:59:48.326400 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 19 11:59:48.510895 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 19 11:59:48.510938 kernel: audit: type=1130 audit(1768823988.320:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:48.510958 kernel: audit: type=1131 audit(1768823988.320:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:48.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:48.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:48.714669 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Jan 19 11:59:48.753506 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 11:59:48.753580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 11:59:48.882663 kernel: BTRFS info (device vda6): turning on async discard Jan 19 11:59:48.882743 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 11:59:48.991338 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 11:59:49.040746 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 19 11:59:49.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:49.170926 kernel: audit: type=1130 audit(1768823989.105:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:49.172537 systemd-networkd[725]: eth0: Gained IPv6LL Jan 19 11:59:49.174380 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
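The disk-uuid warning above notes that the kernel still holds the old partition table and points at partprobe(8) or kpartx(8). Purely as a hedged sketch (the whole-disk name /dev/vda is an assumption inferred from the vda6/vda9 partitions seen elsewhere in this log), triggering that re-read from Python could look like:

    import subprocess

    DEVICE = "/dev/vda"  # assumed whole-disk node; adjust for the machine at hand

    # Ask the kernel to re-read the rewritten GPT, as the warning suggests.
    # Needs root; partprobe is the tool named in the log message itself.
    subprocess.run(["partprobe", DEVICE], check=True)

    # The refreshed partition list can then be confirmed without a reboot.
    print(subprocess.run(["lsblk", DEVICE], check=True,
                         capture_output=True, text=True).stdout)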
Jan 19 11:59:49.655592 ignition[882]: Ignition 2.24.0 Jan 19 11:59:49.655728 ignition[882]: Stage: fetch-offline Jan 19 11:59:49.655779 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 19 11:59:49.655792 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 11:59:49.655892 ignition[882]: parsed url from cmdline: "" Jan 19 11:59:49.655897 ignition[882]: no config URL provided Jan 19 11:59:49.656522 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 19 11:59:49.656541 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 19 11:59:49.656606 ignition[882]: op(1): [started] loading QEMU firmware config module Jan 19 11:59:49.656615 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 19 11:59:49.790629 ignition[882]: op(1): [finished] loading QEMU firmware config module Jan 19 11:59:51.412941 ignition[882]: parsing config with SHA512: c8f14ab4ca870ebebbf6d5700d7ff37339938a4bbea5e6d57e948ba94e85362a4d8582ac3f581089ff75404a328eac8b109cd4ca65dd02982c5e3a19735d26a2 Jan 19 11:59:51.428813 unknown[882]: fetched base config from "system" Jan 19 11:59:51.428824 unknown[882]: fetched user config from "qemu" Jan 19 11:59:51.434745 ignition[882]: fetch-offline: fetch-offline passed Jan 19 11:59:51.434834 ignition[882]: Ignition finished successfully Jan 19 11:59:51.490635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 19 11:59:51.510769 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 19 11:59:51.512797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 19 11:59:51.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:51.608531 kernel: audit: type=1130 audit(1768823991.509:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:51.723945 ignition[893]: Ignition 2.24.0 Jan 19 11:59:51.724568 ignition[893]: Stage: kargs Jan 19 11:59:51.724786 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 19 11:59:51.724801 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 11:59:51.726429 ignition[893]: kargs: kargs passed Jan 19 11:59:51.726482 ignition[893]: Ignition finished successfully Jan 19 11:59:51.809652 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 19 11:59:51.875292 kernel: audit: type=1130 audit(1768823991.828:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:51.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:51.877266 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
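After loading the config through the qemu_fw_cfg module, Ignition logs the SHA512 of what it parsed. As a sketch of the same fingerprinting step (the file name user.ign is only a placeholder taken from the search path mentioned above, not a file this log proves exists), hashlib reproduces that kind of digest:

    import hashlib
    from pathlib import Path

    # Placeholder path; the log shows Ignition looking for /usr/lib/ignition/user.ign.
    config_path = Path("user.ign")

    digest = hashlib.sha512(config_path.read_bytes()).hexdigest()
    print(f"parsing config with SHA512: {digest}")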
Jan 19 11:59:52.040295 ignition[901]: Ignition 2.24.0 Jan 19 11:59:52.040427 ignition[901]: Stage: disks Jan 19 11:59:52.040572 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 19 11:59:52.040582 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 11:59:52.099926 ignition[901]: disks: disks passed Jan 19 11:59:52.100602 ignition[901]: Ignition finished successfully Jan 19 11:59:52.130931 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 19 11:59:52.190241 kernel: audit: type=1130 audit(1768823992.148:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:52.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:52.149788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 19 11:59:52.196525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 19 11:59:52.234941 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 11:59:52.311498 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 11:59:52.311896 systemd[1]: Reached target basic.target - Basic System. Jan 19 11:59:52.347963 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 19 11:59:52.520958 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 19 11:59:52.549834 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 19 11:59:52.576562 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 19 11:59:52.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:52.656462 kernel: audit: type=1130 audit(1768823992.572:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:53.336365 kernel: EXT4-fs (vda9): mounted filesystem 94229029-29b7-42b8-a135-4530ccb5ed34 r/w with ordered data mode. Quota mode: none. Jan 19 11:59:53.338312 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 19 11:59:53.353564 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 19 11:59:53.406751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 19 11:59:53.454727 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 19 11:59:53.498538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Jan 19 11:59:53.469601 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 19 11:59:53.469929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 19 11:59:53.469968 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 11:59:53.518967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 19 11:59:53.547876 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 19 11:59:53.696864 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 11:59:53.696890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 11:59:53.747467 kernel: BTRFS info (device vda6): turning on async discard Jan 19 11:59:53.747536 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 11:59:53.752738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 19 11:59:54.325507 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 19 11:59:54.394711 kernel: audit: type=1130 audit(1768823994.325:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.328857 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 19 11:59:54.440359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 19 11:59:54.476607 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 19 11:59:54.511640 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 11:59:54.628482 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 19 11:59:54.695978 kernel: audit: type=1130 audit(1768823994.641:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.696364 ignition[1018]: INFO : Ignition 2.24.0 Jan 19 11:59:54.696364 ignition[1018]: INFO : Stage: mount Jan 19 11:59:54.696364 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 11:59:54.696364 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 11:59:54.802802 kernel: audit: type=1130 audit(1768823994.748:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:54.802892 ignition[1018]: INFO : mount: mount passed Jan 19 11:59:54.802892 ignition[1018]: INFO : Ignition finished successfully Jan 19 11:59:54.740697 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 19 11:59:54.750959 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 19 11:59:54.918476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 19 11:59:55.028665 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1032) Jan 19 11:59:55.064551 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 11:59:55.064623 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 11:59:55.133440 kernel: BTRFS info (device vda6): turning on async discard Jan 19 11:59:55.133517 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 11:59:55.139921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 19 11:59:55.279823 ignition[1049]: INFO : Ignition 2.24.0 Jan 19 11:59:55.279823 ignition[1049]: INFO : Stage: files Jan 19 11:59:55.300873 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 11:59:55.300873 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 11:59:55.335670 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping Jan 19 11:59:55.350906 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 19 11:59:55.350906 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 19 11:59:55.387795 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 19 11:59:55.387795 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 19 11:59:55.387795 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 19 11:59:55.387795 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 19 11:59:55.387795 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 19 11:59:55.356698 unknown[1049]: wrote ssh authorized keys file for user: core Jan 19 11:59:55.602796 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 19 11:59:55.736351 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 19 11:59:55.736351 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 11:59:55.790907 ignition[1049]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 19 11:59:55.790907 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 19 11:59:56.410559 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 19 11:59:57.032286 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 19 11:59:57.032286 ignition[1049]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 19 11:59:57.089563 ignition[1049]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 19 11:59:57.298001 ignition[1049]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 19 11:59:57.335668 ignition[1049]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 19 11:59:57.335668 ignition[1049]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 19 11:59:57.335668 ignition[1049]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 19 11:59:57.335668 ignition[1049]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 19 11:59:57.428463 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 19 11:59:57.428463 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jan 19 11:59:57.428463 ignition[1049]: INFO : files: files passed Jan 19 11:59:57.428463 ignition[1049]: INFO : Ignition finished successfully Jan 19 11:59:57.489415 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 19 11:59:57.560475 kernel: audit: type=1130 audit(1768823997.515:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.518544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 19 11:59:57.567491 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 19 11:59:57.657899 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 19 11:59:57.658523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 19 11:59:57.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.721008 initrd-setup-root-after-ignition[1080]: grep: /sysroot/oem/oem-release: No such file or directory Jan 19 11:59:57.780465 kernel: audit: type=1130 audit(1768823997.704:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.780492 kernel: audit: type=1131 audit(1768823997.705:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.746906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 11:59:57.871799 kernel: audit: type=1130 audit(1768823997.818:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:57.871909 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 11:59:57.871909 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 19 11:59:57.819760 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 19 11:59:57.964631 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 11:59:57.890658 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 19 11:59:58.143936 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
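The files stage finishes by writing /sysroot/etc/.ignition-result.json, which records the outcome of the run. A hedged sketch for inspecting it after boot follows; the schema of the file is not shown in this log, so no particular keys are assumed, and on the running system the path loses its /sysroot prefix.

    import json
    from pathlib import Path

    # Inside the initrd the file is written under /sysroot; after switch-root
    # the same file is visible at /etc/.ignition-result.json.
    result_path = Path("/etc/.ignition-result.json")

    if result_path.exists():
        result = json.loads(result_path.read_text())
        print(json.dumps(result, indent=2, sort_keys=True))  # keys not assumed here
    else:
        print("no Ignition result file found on this system")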
Jan 19 11:59:58.144753 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 19 11:59:58.257534 kernel: audit: type=1130 audit(1768823998.178:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.257560 kernel: audit: type=1131 audit(1768823998.179:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.179697 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 19 11:59:58.259478 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 19 11:59:58.309563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 19 11:59:58.312684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 19 11:59:58.467843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 11:59:58.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.506643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 19 11:59:58.568680 kernel: audit: type=1130 audit(1768823998.501:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.619931 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 11:59:58.620682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 19 11:59:58.655728 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 11:59:58.688704 systemd[1]: Stopped target timers.target - Timer Units. Jan 19 11:59:58.730573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 19 11:59:58.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:58.730709 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 11:59:58.763653 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 19 11:59:58.781857 systemd[1]: Stopped target basic.target - Basic System. Jan 19 11:59:58.837477 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 19 11:59:58.847770 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 11:59:58.879701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 19 11:59:58.909689 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 19 11:59:58.938628 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 19 11:59:58.970480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 11:59:59.005682 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 19 11:59:59.037422 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 19 11:59:59.072005 systemd[1]: Stopped target swap.target - Swaps. Jan 19 11:59:59.106550 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 19 11:59:59.106886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 19 11:59:59.146679 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 19 11:59:59.160764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 11:59:59.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.207440 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 19 11:59:59.207839 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 11:59:59.232927 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 19 11:59:59.233707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 19 11:59:59.394847 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 19 11:59:59.394878 kernel: audit: type=1131 audit(1768823999.291:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.345605 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 19 11:59:59.345846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 19 11:59:59.433946 systemd[1]: Stopped target paths.target - Path Units. Jan 19 11:59:59.544007 kernel: audit: type=1131 audit(1768823999.433:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.488785 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 19 11:59:59.493748 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 11:59:59.610885 systemd[1]: Stopped target slices.target - Slice Units. Jan 19 11:59:59.626886 systemd[1]: Stopped target sockets.target - Socket Units. Jan 19 11:59:59.636671 systemd[1]: iscsid.socket: Deactivated successfully. Jan 19 11:59:59.636769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 11:59:59.656837 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 19 11:59:59.656918 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 11:59:59.723538 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 19 11:59:59.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.723632 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 19 11:59:59.761814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 19 11:59:59.918833 kernel: audit: type=1131 audit(1768823999.810:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.761991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 11:59:59.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.810846 systemd[1]: ignition-files.service: Deactivated successfully. Jan 19 12:00:00.063977 kernel: audit: type=1131 audit(1768823999.883:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.064010 kernel: audit: type=1131 audit(1768824000.016:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.810955 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 19 12:00:00.076899 ignition[1106]: INFO : Ignition 2.24.0 Jan 19 12:00:00.076899 ignition[1106]: INFO : Stage: umount Jan 19 12:00:00.076899 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 12:00:00.076899 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 19 12:00:00.303695 kernel: audit: type=1131 audit(1768824000.105:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.303739 kernel: audit: type=1131 audit(1768824000.162:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.303762 kernel: audit: type=1131 audit(1768824000.214:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:00.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.888010 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 19 12:00:00.313858 ignition[1106]: INFO : umount: umount passed Jan 19 12:00:00.313858 ignition[1106]: INFO : Ignition finished successfully Jan 19 12:00:00.390718 kernel: audit: type=1131 audit(1768824000.317:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.919006 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 19 12:00:00.458645 kernel: audit: type=1130 audit(1768824000.390:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 11:59:59.919884 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 12:00:00.021439 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 19 12:00:00.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.076500 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 19 12:00:00.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.076723 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 12:00:00.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.105749 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 19 12:00:00.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.105893 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 12:00:00.163715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 19 12:00:00.163867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 12:00:00.277828 systemd[1]: ignition-mount.service: Deactivated successfully. 
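Most entries in this part of the log are audit SERVICE_START/SERVICE_STOP records of the form msg='unit=... res=success'. A small illustrative parser, fed one record quoted from the log, pulls out the unit name and result:

    import re

    # One SERVICE_STOP record copied from this log.
    sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=ignition-mount comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

    record_type = re.search(r"SERVICE_(START|STOP)", sample).group(0)

    # Outer pass: key=value pairs, keeping the quoted msg='...' payload intact.
    outer = dict(re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", sample))
    # Inner pass: key=value pairs inside the msg payload.
    inner = dict(re.findall(r"(\w+)=(\"[^\"]*\"|\S+)", outer["msg"].strip("'")))

    print(record_type)                    # SERVICE_STOP
    print(inner["unit"], inner["res"])    # ignition-mount success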
Jan 19 12:00:00.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.305484 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 19 12:00:00.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:00.386725 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 19 12:00:00.386974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 19 12:00:00.394998 systemd[1]: Stopped target network.target - Network. Jan 19 12:00:00.459438 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 19 12:00:00.459517 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 19 12:00:00.489484 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 19 12:00:00.489564 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 19 12:00:00.518901 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 19 12:00:00.518967 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 19 12:00:00.564740 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 19 12:00:00.564795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 19 12:00:00.584656 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 19 12:00:00.646487 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 19 12:00:00.662728 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 19 12:00:00.663838 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 19 12:00:00.664381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 19 12:00:00.699545 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 19 12:00:00.699608 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 19 12:00:01.047952 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 19 12:00:01.048763 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 19 12:00:01.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.097740 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 19 12:00:01.098441 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 19 12:00:01.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.149000 audit: BPF prog-id=6 op=UNLOAD Jan 19 12:00:01.149000 audit: BPF prog-id=9 op=UNLOAD Jan 19 12:00:01.149971 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 19 12:00:01.150692 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 19 12:00:01.150767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 19 12:00:01.192884 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 19 12:00:01.218483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 19 12:00:01.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.218569 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 12:00:01.252788 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 19 12:00:01.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.252876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 19 12:00:01.317687 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 19 12:00:01.317778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 19 12:00:01.337638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 12:00:01.481873 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 19 12:00:01.499965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 12:00:01.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.543820 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 19 12:00:01.544011 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 19 12:00:01.591614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 19 12:00:01.591819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 12:00:01.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.603817 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 19 12:00:01.603902 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 19 12:00:01.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.642664 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 19 12:00:01.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.642859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 19 12:00:01.701422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 19 12:00:01.701501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 12:00:01.783938 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 19 12:00:01.784872 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 19 12:00:01.784954 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 12:00:01.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.851531 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 19 12:00:01.851614 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 12:00:01.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:01.889925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 12:00:01.889989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:00:01.929008 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 19 12:00:01.997895 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 19 12:00:01.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:02.022961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 19 12:00:02.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:02.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:02.023759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 19 12:00:02.033483 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 19 12:00:02.085862 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 19 12:00:02.182424 systemd[1]: Switching root. Jan 19 12:00:02.251906 systemd-journald[320]: Journal stopped Jan 19 12:00:09.054517 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 19 12:00:09.054588 kernel: SELinux: policy capability network_peer_controls=1 Jan 19 12:00:09.054603 kernel: SELinux: policy capability open_perms=1 Jan 19 12:00:09.054617 kernel: SELinux: policy capability extended_socket_class=1 Jan 19 12:00:09.054629 kernel: SELinux: policy capability always_check_network=0 Jan 19 12:00:09.054640 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 19 12:00:09.054651 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 19 12:00:09.054667 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 19 12:00:09.054677 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 19 12:00:09.054688 kernel: SELinux: policy capability userspace_initial_context=0 Jan 19 12:00:09.054702 systemd[1]: Successfully loaded SELinux policy in 266.034ms. Jan 19 12:00:09.054724 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 39.950ms. 
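Around the switch-root the log reports its own timings: "Switching root." at 12:00:02.182424, journald acknowledging SIGTERM at 12:00:09.054517, and an SELinux policy load of 266.034 ms in between. A sketch for measuring such gaps straight from the timestamp prefixes of these lines follows; the year is an assumption, since the prefix omits it.

    from datetime import datetime

    FMT = "%Y %b %d %H:%M:%S.%f"

    def ts(prefix: str, year: int = 2026) -> datetime:
        # The journal prefix ("Jan 19 12:00:02.182424") has no year,
        # so one is supplied here purely to make the arithmetic possible.
        return datetime.strptime(f"{year} {prefix}", FMT)

    switching_root = ts("Jan 19 12:00:02.182424")  # systemd[1]: Switching root.
    sigterm_logged = ts("Jan 19 12:00:09.054517")  # journald: Received SIGTERM

    gap = sigterm_logged - switching_root
    print(round(gap.total_seconds(), 3))           # 6.872 seconds between the two entries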
Jan 19 12:00:09.054737 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 12:00:09.054753 systemd[1]: Detected virtualization kvm. Jan 19 12:00:09.054770 systemd[1]: Detected architecture x86-64. Jan 19 12:00:09.054786 systemd[1]: Detected first boot. Jan 19 12:00:09.054797 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 12:00:09.054808 zram_generator::config[1149]: No configuration found. Jan 19 12:00:09.054828 kernel: Guest personality initialized and is inactive Jan 19 12:00:09.054839 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 19 12:00:09.054852 kernel: Initialized host personality Jan 19 12:00:09.054863 kernel: NET: Registered PF_VSOCK protocol family Jan 19 12:00:09.054874 systemd[1]: Populated /etc with preset unit settings. Jan 19 12:00:09.054885 kernel: kauditd_printk_skb: 30 callbacks suppressed Jan 19 12:00:09.054899 kernel: audit: type=1334 audit(1768824006.817:89): prog-id=12 op=LOAD Jan 19 12:00:09.054911 kernel: audit: type=1334 audit(1768824006.817:90): prog-id=3 op=UNLOAD Jan 19 12:00:09.054922 kernel: audit: type=1334 audit(1768824006.817:91): prog-id=13 op=LOAD Jan 19 12:00:09.054939 kernel: audit: type=1334 audit(1768824006.817:92): prog-id=14 op=LOAD Jan 19 12:00:09.054955 kernel: audit: type=1334 audit(1768824006.817:93): prog-id=4 op=UNLOAD Jan 19 12:00:09.054965 kernel: audit: type=1334 audit(1768824006.817:94): prog-id=5 op=UNLOAD Jan 19 12:00:09.054980 kernel: audit: type=1131 audit(1768824006.826:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.055000 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 19 12:00:09.055196 kernel: audit: type=1334 audit(1768824006.996:96): prog-id=12 op=UNLOAD Jan 19 12:00:09.055214 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 19 12:00:09.055227 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 19 12:00:09.055247 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 19 12:00:09.055259 kernel: audit: type=1130 audit(1768824007.069:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.055271 kernel: audit: type=1131 audit(1768824007.069:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.055283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 19 12:00:09.055294 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 19 12:00:09.055306 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 19 12:00:09.055327 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 19 12:00:09.055343 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
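systemd prints its compile-time feature set as a run of +FEATURE/-FEATURE tokens. A small sketch that splits the string (copied verbatim from the entry above) into enabled and disabled sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC "
                "+KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}

    print("SELINUX" in enabled)     # True, consistent with the policy load above
    print("APPARMOR" in disabled)   # True
    print(len(enabled), len(disabled))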
Jan 19 12:00:09.055355 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 19 12:00:09.055367 systemd[1]: Created slice user.slice - User and Session Slice. Jan 19 12:00:09.055379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 12:00:09.055390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 12:00:09.055404 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 19 12:00:09.055416 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 19 12:00:09.055429 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 19 12:00:09.055440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 12:00:09.055452 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 19 12:00:09.055464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 12:00:09.055476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 12:00:09.055490 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 19 12:00:09.055501 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 19 12:00:09.055514 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 19 12:00:09.055525 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 19 12:00:09.055537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 12:00:09.055549 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 12:00:09.055561 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 19 12:00:09.055574 systemd[1]: Reached target slices.target - Slice Units. Jan 19 12:00:09.055587 systemd[1]: Reached target swap.target - Swaps. Jan 19 12:00:09.055599 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 19 12:00:09.055610 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 19 12:00:09.055621 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 19 12:00:09.055633 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 12:00:09.055645 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 19 12:00:09.055659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 12:00:09.055670 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 19 12:00:09.055682 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 19 12:00:09.055693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 12:00:09.055705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 12:00:09.055717 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 19 12:00:09.055728 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 19 12:00:09.055742 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 19 12:00:09.055754 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 19 12:00:09.055765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:00:09.055778 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 19 12:00:09.055800 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 19 12:00:09.055821 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 19 12:00:09.055844 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 19 12:00:09.055870 systemd[1]: Reached target machines.target - Containers. Jan 19 12:00:09.055893 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 19 12:00:09.055912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 12:00:09.055933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 12:00:09.055954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 19 12:00:09.055976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 12:00:09.055997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 12:00:09.056256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 12:00:09.056274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 19 12:00:09.056286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 19 12:00:09.056298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 19 12:00:09.056311 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 19 12:00:09.056323 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 19 12:00:09.056338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 19 12:00:09.056349 systemd[1]: Stopped systemd-fsck-usr.service. Jan 19 12:00:09.056361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 12:00:09.056373 kernel: fuse: init (API version 7.41) Jan 19 12:00:09.056387 kernel: ACPI: bus type drm_connector registered Jan 19 12:00:09.056398 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 12:00:09.056410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 12:00:09.056422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 12:00:09.056434 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 19 12:00:09.056474 systemd-journald[1235]: Collecting audit messages is enabled. Jan 19 12:00:09.056499 systemd-journald[1235]: Journal started Jan 19 12:00:09.056520 systemd-journald[1235]: Runtime Journal (/run/log/journal/c7cf1b80cc4246da8dba05b9d0a95232) is 6M, max 48M, 42M free. 
Jan 19 12:00:08.414000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 19 12:00:08.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:08.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:08.971000 audit: BPF prog-id=14 op=UNLOAD Jan 19 12:00:08.972000 audit: BPF prog-id=13 op=UNLOAD Jan 19 12:00:08.976000 audit: BPF prog-id=15 op=LOAD Jan 19 12:00:08.978000 audit: BPF prog-id=16 op=LOAD Jan 19 12:00:08.979000 audit: BPF prog-id=17 op=LOAD Jan 19 12:00:09.051000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 19 12:00:09.051000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd5d2c37e0 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:09.051000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 19 12:00:06.758841 systemd[1]: Queued start job for default target multi-user.target. Jan 19 12:00:06.819588 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 19 12:00:06.825001 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 19 12:00:06.826685 systemd[1]: systemd-journald.service: Consumed 5.118s CPU time. Jan 19 12:00:09.069177 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 19 12:00:09.083235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 12:00:09.089405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:00:09.111250 systemd[1]: Started systemd-journald.service - Journal Service. Jan 19 12:00:09.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.112709 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 19 12:00:09.118629 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 19 12:00:09.124629 systemd[1]: Mounted media.mount - External Media Directory. Jan 19 12:00:09.129976 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 19 12:00:09.136264 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 19 12:00:09.142185 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 19 12:00:09.148010 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 19 12:00:09.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:09.155884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 12:00:09.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.163963 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 19 12:00:09.164471 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 19 12:00:09.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.172236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 12:00:09.172543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 19 12:00:09.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.179805 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 19 12:00:09.180256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 12:00:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.186939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 12:00:09.187458 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 12:00:09.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.195658 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 19 12:00:09.195962 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 19 12:00:09.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:09.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.202864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 12:00:09.203306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 12:00:09.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.210346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 12:00:09.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.217794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 12:00:09.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.226503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 19 12:00:09.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.234749 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 19 12:00:09.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.243395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 12:00:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.264302 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 19 12:00:09.272542 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 19 12:00:09.281417 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 19 12:00:09.303812 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 19 12:00:09.310240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 19 12:00:09.310357 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 12:00:09.317486 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 19 12:00:09.325203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 12:00:09.325405 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 12:00:09.327771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 19 12:00:09.343892 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 19 12:00:09.350640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 12:00:09.352354 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 19 12:00:09.358825 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 12:00:09.369416 systemd-journald[1235]: Time spent on flushing to /var/log/journal/c7cf1b80cc4246da8dba05b9d0a95232 is 18.741ms for 1212 entries. Jan 19 12:00:09.369416 systemd-journald[1235]: System Journal (/var/log/journal/c7cf1b80cc4246da8dba05b9d0a95232) is 8M, max 163.5M, 155.5M free. Jan 19 12:00:09.408602 systemd-journald[1235]: Received client request to flush runtime journal. Jan 19 12:00:09.365267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 12:00:09.384514 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 19 12:00:09.398513 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 19 12:00:09.412898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 19 12:00:09.422646 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 19 12:00:09.432824 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 19 12:00:09.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.444623 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 19 12:00:09.449176 kernel: loop1: detected capacity change from 0 to 50784 Jan 19 12:00:09.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.459455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 12:00:09.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.480802 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 19 12:00:09.490822 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 19 12:00:09.508953 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 19 12:00:09.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:09.531000 audit: BPF prog-id=18 op=LOAD Jan 19 12:00:09.531000 audit: BPF prog-id=19 op=LOAD Jan 19 12:00:09.531000 audit: BPF prog-id=20 op=LOAD Jan 19 12:00:09.543234 kernel: loop2: detected capacity change from 0 to 229808 Jan 19 12:00:09.537271 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 19 12:00:09.547000 audit: BPF prog-id=21 op=LOAD Jan 19 12:00:09.549422 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 19 12:00:09.559342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 12:00:09.569000 audit: BPF prog-id=22 op=LOAD Jan 19 12:00:09.569000 audit: BPF prog-id=23 op=LOAD Jan 19 12:00:09.569000 audit: BPF prog-id=24 op=LOAD Jan 19 12:00:09.572288 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 19 12:00:09.587000 audit: BPF prog-id=25 op=LOAD Jan 19 12:00:09.588000 audit: BPF prog-id=26 op=LOAD Jan 19 12:00:09.588000 audit: BPF prog-id=27 op=LOAD Jan 19 12:00:09.589430 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 19 12:00:09.602786 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 19 12:00:09.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.624429 kernel: loop3: detected capacity change from 0 to 111560 Jan 19 12:00:09.635374 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Jan 19 12:00:09.635399 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Jan 19 12:00:09.645757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 12:00:09.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.655659 systemd-nsresourced[1289]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 19 12:00:09.658521 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 19 12:00:09.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.680287 kernel: loop4: detected capacity change from 0 to 50784 Jan 19 12:00:09.704895 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 19 12:00:09.711647 kernel: loop5: detected capacity change from 0 to 229808 Jan 19 12:00:09.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:09.736388 kernel: loop6: detected capacity change from 0 to 111560 Jan 19 12:00:09.756361 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 19 12:00:09.761857 (sd-merge)[1303]: Merged extensions into '/usr'. Jan 19 12:00:09.767965 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)... Jan 19 12:00:09.767981 systemd[1]: Reloading... 
Jan 19 12:00:09.795720 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Jan 19 12:00:09.830353 systemd-resolved[1287]: Positive Trust Anchors: Jan 19 12:00:09.830658 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 12:00:09.830706 systemd-resolved[1287]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 12:00:09.830768 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 12:00:09.836427 systemd-resolved[1287]: Defaulting to hostname 'linux'. Jan 19 12:00:09.862239 zram_generator::config[1343]: No configuration found. Jan 19 12:00:10.108794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 19 12:00:10.109205 systemd[1]: Reloading finished in 340 ms. Jan 19 12:00:10.153232 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 19 12:00:10.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:10.161854 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 19 12:00:10.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:10.169493 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 19 12:00:10.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:10.178442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 19 12:00:10.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:10.198618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 12:00:10.230458 systemd[1]: Starting ensure-sysext.service... Jan 19 12:00:10.237613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 12:00:10.244000 audit: BPF prog-id=8 op=UNLOAD Jan 19 12:00:10.244000 audit: BPF prog-id=7 op=UNLOAD Jan 19 12:00:10.245000 audit: BPF prog-id=28 op=LOAD Jan 19 12:00:10.245000 audit: BPF prog-id=29 op=LOAD Jan 19 12:00:10.256524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 19 12:00:10.266000 audit: BPF prog-id=30 op=LOAD Jan 19 12:00:10.266000 audit: BPF prog-id=18 op=UNLOAD Jan 19 12:00:10.266000 audit: BPF prog-id=31 op=LOAD Jan 19 12:00:10.267000 audit: BPF prog-id=32 op=LOAD Jan 19 12:00:10.267000 audit: BPF prog-id=19 op=UNLOAD Jan 19 12:00:10.267000 audit: BPF prog-id=20 op=UNLOAD Jan 19 12:00:10.268000 audit: BPF prog-id=33 op=LOAD Jan 19 12:00:10.268000 audit: BPF prog-id=25 op=UNLOAD Jan 19 12:00:10.268000 audit: BPF prog-id=34 op=LOAD Jan 19 12:00:10.268000 audit: BPF prog-id=35 op=LOAD Jan 19 12:00:10.268000 audit: BPF prog-id=26 op=UNLOAD Jan 19 12:00:10.268000 audit: BPF prog-id=27 op=UNLOAD Jan 19 12:00:10.270000 audit: BPF prog-id=36 op=LOAD Jan 19 12:00:10.270000 audit: BPF prog-id=21 op=UNLOAD Jan 19 12:00:10.272000 audit: BPF prog-id=37 op=LOAD Jan 19 12:00:10.272000 audit: BPF prog-id=15 op=UNLOAD Jan 19 12:00:10.272000 audit: BPF prog-id=38 op=LOAD Jan 19 12:00:10.272000 audit: BPF prog-id=39 op=LOAD Jan 19 12:00:10.272000 audit: BPF prog-id=16 op=UNLOAD Jan 19 12:00:10.272000 audit: BPF prog-id=17 op=UNLOAD Jan 19 12:00:10.274000 audit: BPF prog-id=40 op=LOAD Jan 19 12:00:10.274000 audit: BPF prog-id=22 op=UNLOAD Jan 19 12:00:10.274000 audit: BPF prog-id=41 op=LOAD Jan 19 12:00:10.274000 audit: BPF prog-id=42 op=LOAD Jan 19 12:00:10.274000 audit: BPF prog-id=23 op=UNLOAD Jan 19 12:00:10.274000 audit: BPF prog-id=24 op=UNLOAD Jan 19 12:00:10.282228 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 19 12:00:10.282300 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 19 12:00:10.282584 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 19 12:00:10.284503 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 19 12:00:10.284807 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 19 12:00:10.286272 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Jan 19 12:00:10.286335 systemd[1]: Reloading... Jan 19 12:00:10.294610 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 12:00:10.294684 systemd-tmpfiles[1378]: Skipping /boot Jan 19 12:00:10.301589 systemd-udevd[1379]: Using default interface naming scheme 'v257'. Jan 19 12:00:10.313420 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 12:00:10.313436 systemd-tmpfiles[1378]: Skipping /boot Jan 19 12:00:10.374694 zram_generator::config[1409]: No configuration found. Jan 19 12:00:10.536382 kernel: mousedev: PS/2 mouse device common for all mice Jan 19 12:00:10.580258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 19 12:00:10.611280 kernel: ACPI: button: Power Button [PWRF] Jan 19 12:00:10.642319 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 19 12:00:10.650319 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 19 12:00:10.660299 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 19 12:00:10.733786 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 19 12:00:10.734011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 19 12:00:10.744671 systemd[1]: Reloading finished in 457 ms. 
Jan 19 12:00:10.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:10.786351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 12:00:10.799000 audit: BPF prog-id=43 op=LOAD Jan 19 12:00:10.799000 audit: BPF prog-id=44 op=LOAD Jan 19 12:00:10.801000 audit: BPF prog-id=28 op=UNLOAD Jan 19 12:00:10.801000 audit: BPF prog-id=29 op=UNLOAD Jan 19 12:00:10.804000 audit: BPF prog-id=45 op=LOAD Jan 19 12:00:10.804000 audit: BPF prog-id=36 op=UNLOAD Jan 19 12:00:10.813000 audit: BPF prog-id=46 op=LOAD Jan 19 12:00:10.813000 audit: BPF prog-id=30 op=UNLOAD Jan 19 12:00:10.815000 audit: BPF prog-id=47 op=LOAD Jan 19 12:00:10.817000 audit: BPF prog-id=48 op=LOAD Jan 19 12:00:10.817000 audit: BPF prog-id=31 op=UNLOAD Jan 19 12:00:10.817000 audit: BPF prog-id=32 op=UNLOAD Jan 19 12:00:10.821000 audit: BPF prog-id=49 op=LOAD Jan 19 12:00:10.824000 audit: BPF prog-id=37 op=UNLOAD Jan 19 12:00:10.824000 audit: BPF prog-id=50 op=LOAD Jan 19 12:00:10.824000 audit: BPF prog-id=51 op=LOAD Jan 19 12:00:10.824000 audit: BPF prog-id=38 op=UNLOAD Jan 19 12:00:10.824000 audit: BPF prog-id=39 op=UNLOAD Jan 19 12:00:10.827000 audit: BPF prog-id=52 op=LOAD Jan 19 12:00:10.827000 audit: BPF prog-id=33 op=UNLOAD Jan 19 12:00:10.828000 audit: BPF prog-id=53 op=LOAD Jan 19 12:00:10.829000 audit: BPF prog-id=54 op=LOAD Jan 19 12:00:10.830000 audit: BPF prog-id=34 op=UNLOAD Jan 19 12:00:10.830000 audit: BPF prog-id=35 op=UNLOAD Jan 19 12:00:10.832000 audit: BPF prog-id=55 op=LOAD Jan 19 12:00:10.834000 audit: BPF prog-id=40 op=UNLOAD Jan 19 12:00:10.835000 audit: BPF prog-id=56 op=LOAD Jan 19 12:00:10.838000 audit: BPF prog-id=57 op=LOAD Jan 19 12:00:10.838000 audit: BPF prog-id=41 op=UNLOAD Jan 19 12:00:10.838000 audit: BPF prog-id=42 op=UNLOAD Jan 19 12:00:10.905699 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 12:00:10.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.035791 kernel: kvm_amd: TSC scaling supported Jan 19 12:00:11.035890 kernel: kvm_amd: Nested Virtualization enabled Jan 19 12:00:11.035906 kernel: kvm_amd: Nested Paging enabled Jan 19 12:00:11.041774 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 19 12:00:11.041840 kernel: kvm_amd: PMU virtualization is disabled Jan 19 12:00:11.041905 systemd[1]: Finished ensure-sysext.service. Jan 19 12:00:11.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.157006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:00:11.159435 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 19 12:00:11.165733 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 19 12:00:11.171855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 19 12:00:11.173425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 12:00:11.187361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 12:00:11.197294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 12:00:11.205788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 19 12:00:11.212825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 12:00:11.212948 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 12:00:11.214508 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 19 12:00:11.230162 kernel: EDAC MC: Ver: 3.0.0 Jan 19 12:00:11.227431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 19 12:00:11.240940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 12:00:11.243390 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 19 12:00:11.256000 audit: BPF prog-id=58 op=LOAD Jan 19 12:00:11.258301 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 12:00:11.268000 audit: BPF prog-id=59 op=LOAD Jan 19 12:00:11.271326 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 19 12:00:11.283476 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 19 12:00:11.301783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 12:00:11.312000 audit[1521]: SYSTEM_BOOT pid=1521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.307490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 12:00:11.309406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 12:00:11.314387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 19 12:00:11.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.322752 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 19 12:00:11.323316 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 12:00:11.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:11.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.330665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 12:00:11.331880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 12:00:11.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.342469 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 12:00:11.342698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 12:00:11.343424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 19 12:00:11.346292 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 19 12:00:11.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:11.361493 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 12:00:11.361667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 12:00:11.373983 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 19 12:00:11.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:11.389000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 19 12:00:11.389000 audit[1536]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1a3f7860 a2=420 a3=0 items=0 ppid=1493 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:11.389000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:00:11.389484 augenrules[1536]: No rules Jan 19 12:00:11.392457 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 12:00:11.393823 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 12:00:11.399334 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 19 12:00:11.400401 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 19 12:00:11.458474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 12:00:11.473861 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 19 12:00:11.482832 systemd[1]: Reached target time-set.target - System Time Set. Jan 19 12:00:11.482888 systemd-networkd[1513]: lo: Link UP Jan 19 12:00:11.482895 systemd-networkd[1513]: lo: Gained carrier Jan 19 12:00:11.486164 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:00:11.486208 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 12:00:11.487700 systemd-networkd[1513]: eth0: Link UP Jan 19 12:00:11.488985 systemd-networkd[1513]: eth0: Gained carrier Jan 19 12:00:11.489224 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 12:00:11.491458 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 12:00:11.500507 systemd[1]: Reached target network.target - Network. Jan 19 12:00:11.507754 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 19 12:00:11.515608 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 19 12:00:11.528474 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 19 12:00:11.529487 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 19 12:00:11.530694 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 19 12:00:11.530793 systemd-timesyncd[1518]: Initial clock synchronization to Mon 2026-01-19 12:00:11.619742 UTC. Jan 19 12:00:11.548006 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 19 12:00:11.992824 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 19 12:00:12.000928 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 19 12:00:12.012979 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 19 12:00:12.050443 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 19 12:00:12.057329 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 12:00:12.063617 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 19 12:00:12.070438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 19 12:00:12.077223 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 19 12:00:12.084485 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 19 12:00:12.090729 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 19 12:00:12.098986 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 19 12:00:12.106483 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 19 12:00:12.113259 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 19 12:00:12.120503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 19 12:00:12.120608 systemd[1]: Reached target paths.target - Path Units. Jan 19 12:00:12.126271 systemd[1]: Reached target timers.target - Timer Units. Jan 19 12:00:12.133375 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 19 12:00:12.141852 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 19 12:00:12.150777 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 19 12:00:12.159298 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 19 12:00:12.167790 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 19 12:00:12.184526 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 19 12:00:12.192329 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 19 12:00:12.202584 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 19 12:00:12.212379 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 12:00:12.219879 systemd[1]: Reached target basic.target - Basic System. Jan 19 12:00:12.227461 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 19 12:00:12.227538 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 19 12:00:12.229433 systemd[1]: Starting containerd.service - containerd container runtime... Jan 19 12:00:12.239633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 19 12:00:12.248832 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 19 12:00:12.258731 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 19 12:00:12.268496 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 19 12:00:12.273784 jq[1563]: false Jan 19 12:00:12.276612 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 19 12:00:12.278729 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 19 12:00:12.302275 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 19 12:00:12.311586 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 19 12:00:12.311596 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 19 12:00:12.314253 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 19 12:00:12.317993 extend-filesystems[1564]: Found /dev/vda6 Jan 19 12:00:12.323640 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 19 12:00:12.330232 extend-filesystems[1564]: Found /dev/vda9 Jan 19 12:00:12.345164 extend-filesystems[1564]: Checking size of /dev/vda9 Jan 19 12:00:12.351299 oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 19 12:00:12.351486 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 19 12:00:12.351486 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 12:00:12.351486 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 19 12:00:12.351319 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 12:00:12.351575 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 19 12:00:12.351367 oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 19 12:00:12.352984 extend-filesystems[1564]: Resized partition /dev/vda9 Jan 19 12:00:12.361511 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025) Jan 19 12:00:12.373383 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 19 12:00:12.373383 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 12:00:12.373258 oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 19 12:00:12.373272 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 12:00:12.375398 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 19 12:00:12.379746 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 19 12:00:12.387407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 19 12:00:12.388348 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 19 12:00:12.389812 systemd[1]: Starting update-engine.service - Update Engine... Jan 19 12:00:12.398292 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 19 12:00:12.410178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 19 12:00:12.417191 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 19 12:00:12.418856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 19 12:00:12.419825 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 19 12:00:12.421605 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 19 12:00:12.432781 jq[1583]: true Jan 19 12:00:12.433131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 19 12:00:12.433496 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 19 12:00:12.437131 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 19 12:00:12.461499 jq[1594]: true Jan 19 12:00:12.463660 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 19 12:00:12.463660 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 19 12:00:12.463660 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 19 12:00:12.484169 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Jan 19 12:00:12.478685 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 19 12:00:12.489241 update_engine[1581]: I20260119 12:00:12.485644 1581 main.cc:92] Flatcar Update Engine starting Jan 19 12:00:12.479255 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 19 12:00:12.496012 systemd[1]: motdgen.service: Deactivated successfully. Jan 19 12:00:12.496546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 19 12:00:12.518223 tar[1592]: linux-amd64/LICENSE Jan 19 12:00:12.518223 tar[1592]: linux-amd64/helm Jan 19 12:00:12.569692 dbus-daemon[1561]: [system] SELinux support is enabled Jan 19 12:00:12.570213 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 19 12:00:12.582512 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 19 12:00:12.582594 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 19 12:00:12.589194 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 19 12:00:12.589267 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 19 12:00:12.596861 update_engine[1581]: I20260119 12:00:12.596287 1581 update_check_scheduler.cc:74] Next update check in 6m32s Jan 19 12:00:12.596394 systemd[1]: Started update-engine.service - Update Engine. Jan 19 12:00:12.597653 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Jan 19 12:00:12.597687 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 19 12:00:12.598400 systemd-logind[1580]: New seat seat0. Jan 19 12:00:12.602733 systemd[1]: Started systemd-logind.service - User Login Management. Jan 19 12:00:12.611583 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 19 12:00:12.619647 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Jan 19 12:00:12.622215 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 19 12:00:12.630834 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 19 12:00:12.671656 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 19 12:00:12.696962 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 19 12:00:12.709858 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 19 12:00:12.718319 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 19 12:00:12.755936 systemd-networkd[1513]: eth0: Gained IPv6LL Jan 19 12:00:12.760477 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 19 12:00:12.771784 systemd[1]: Reached target network-online.target - Network is Online. Jan 19 12:00:12.780873 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 19 12:00:12.788394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:00:12.796998 containerd[1598]: time="2026-01-19T12:00:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 19 12:00:12.799452 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 19 12:00:12.800787 containerd[1598]: time="2026-01-19T12:00:12.799993049Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 19 12:00:12.805821 systemd[1]: issuegen.service: Deactivated successfully. Jan 19 12:00:12.810391 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 19 12:00:12.826355 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 19 12:00:12.831465 containerd[1598]: time="2026-01-19T12:00:12.831329794Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.135µs" Jan 19 12:00:12.831465 containerd[1598]: time="2026-01-19T12:00:12.831422036Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 19 12:00:12.831465 containerd[1598]: time="2026-01-19T12:00:12.831459817Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 19 12:00:12.831535 containerd[1598]: time="2026-01-19T12:00:12.831476973Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 19 12:00:12.831766 containerd[1598]: time="2026-01-19T12:00:12.831632625Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 19 12:00:12.831797 containerd[1598]: time="2026-01-19T12:00:12.831772612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 12:00:12.831960 containerd[1598]: time="2026-01-19T12:00:12.831849560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 12:00:12.831960 containerd[1598]: time="2026-01-19T12:00:12.831919951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832371 containerd[1598]: time="2026-01-19T12:00:12.832260864Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832371 containerd[1598]: time="2026-01-19T12:00:12.832333864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832371 containerd[1598]: time="2026-01-19T12:00:12.832347092Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832371 containerd[1598]: time="2026-01-19T12:00:12.832358758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832644 containerd[1598]: time="2026-01-19T12:00:12.832525572Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832644 containerd[1598]: time="2026-01-19T12:00:12.832601595Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 19 12:00:12.832966 containerd[1598]: time="2026-01-19T12:00:12.832702018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.837847 containerd[1598]: time="2026-01-19T12:00:12.837687607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.837896 containerd[1598]: time="2026-01-19T12:00:12.837871458Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 12:00:12.837918 containerd[1598]: time="2026-01-19T12:00:12.837898558Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 19 12:00:12.838270 containerd[1598]: time="2026-01-19T12:00:12.837934252Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 19 12:00:12.838708 containerd[1598]: time="2026-01-19T12:00:12.838687037Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 19 12:00:12.838986 containerd[1598]: time="2026-01-19T12:00:12.838959080Z" level=info msg="metadata content store policy set" policy=shared Jan 19 12:00:12.853231 containerd[1598]: time="2026-01-19T12:00:12.852944752Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 19 12:00:12.853294 containerd[1598]: time="2026-01-19T12:00:12.853234153Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 12:00:12.853412 containerd[1598]: time="2026-01-19T12:00:12.853334415Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 12:00:12.853440 containerd[1598]: time="2026-01-19T12:00:12.853415383Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 19 12:00:12.853459 containerd[1598]: time="2026-01-19T12:00:12.853438938Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 19 12:00:12.853459 containerd[1598]: time="2026-01-19T12:00:12.853454443Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 19 12:00:12.853503 containerd[1598]: time="2026-01-19T12:00:12.853468920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 19 12:00:12.853503 containerd[1598]: time="2026-01-19T12:00:12.853481322Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 19 12:00:12.853503 
containerd[1598]: time="2026-01-19T12:00:12.853495709Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 19 12:00:12.853554 containerd[1598]: time="2026-01-19T12:00:12.853512553Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 19 12:00:12.853554 containerd[1598]: time="2026-01-19T12:00:12.853526567Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 19 12:00:12.853554 containerd[1598]: time="2026-01-19T12:00:12.853540378Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 19 12:00:12.853603 containerd[1598]: time="2026-01-19T12:00:12.853555580Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 19 12:00:12.853603 containerd[1598]: time="2026-01-19T12:00:12.853573484Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 19 12:00:12.853818 containerd[1598]: time="2026-01-19T12:00:12.853726979Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 19 12:00:12.853912 containerd[1598]: time="2026-01-19T12:00:12.853813912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 19 12:00:12.853934 containerd[1598]: time="2026-01-19T12:00:12.853906720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 19 12:00:12.853934 containerd[1598]: time="2026-01-19T12:00:12.853924289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 19 12:00:12.853979 containerd[1598]: time="2026-01-19T12:00:12.853937437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 19 12:00:12.853979 containerd[1598]: time="2026-01-19T12:00:12.853949194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 19 12:00:12.853979 containerd[1598]: time="2026-01-19T12:00:12.853965232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 19 12:00:12.854124 containerd[1598]: time="2026-01-19T12:00:12.853981372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 19 12:00:12.854124 containerd[1598]: time="2026-01-19T12:00:12.853996927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 19 12:00:12.854124 containerd[1598]: time="2026-01-19T12:00:12.854010518Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 19 12:00:12.854276 containerd[1598]: time="2026-01-19T12:00:12.854211152Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 19 12:00:12.854354 containerd[1598]: time="2026-01-19T12:00:12.854294448Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 19 12:00:12.854471 containerd[1598]: time="2026-01-19T12:00:12.854409056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 19 12:00:12.854493 containerd[1598]: time="2026-01-19T12:00:12.854478248Z" level=info msg="Start snapshots syncer" Jan 19 12:00:12.854569 containerd[1598]: 
time="2026-01-19T12:00:12.854514537Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 19 12:00:12.855156 containerd[1598]: time="2026-01-19T12:00:12.854921105Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 19 12:00:12.857430 containerd[1598]: time="2026-01-19T12:00:12.857405751Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 19 12:00:12.857535 containerd[1598]: time="2026-01-19T12:00:12.857520138Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 19 12:00:12.857682 containerd[1598]: time="2026-01-19T12:00:12.857665857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 19 12:00:12.857745 containerd[1598]: time="2026-01-19T12:00:12.857732359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 19 12:00:12.857808 containerd[1598]: time="2026-01-19T12:00:12.857793057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 19 12:00:12.857852 containerd[1598]: time="2026-01-19T12:00:12.857841839Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 19 12:00:12.858127 containerd[1598]: time="2026-01-19T12:00:12.857978822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 19 12:00:12.858127 containerd[1598]: time="2026-01-19T12:00:12.857997279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 19 12:00:12.858127 containerd[1598]: time="2026-01-19T12:00:12.858007484Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 19 12:00:12.858211 containerd[1598]: time="2026-01-19T12:00:12.858195566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 19 12:00:12.858255 containerd[1598]: time="2026-01-19T12:00:12.858244972Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 19 12:00:12.858335 containerd[1598]: time="2026-01-19T12:00:12.858321316Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 12:00:12.858384 containerd[1598]: time="2026-01-19T12:00:12.858371629Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 12:00:12.858421 containerd[1598]: time="2026-01-19T12:00:12.858411625Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 12:00:12.858463 containerd[1598]: time="2026-01-19T12:00:12.858452931Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858490610Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858504170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858514456Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858532035Z" level=info msg="runtime interface created" Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858537255Z" level=info msg="created NRI interface" Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858545414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858557746Z" level=info msg="Connect containerd service" Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.858574934Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 19 12:00:12.859605 containerd[1598]: time="2026-01-19T12:00:12.859366238Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 19 12:00:12.861733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 19 12:00:12.876201 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 19 12:00:12.883280 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 19 12:00:12.889491 systemd[1]: Reached target getty.target - Login Prompts. Jan 19 12:00:12.895532 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 19 12:00:12.895877 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 19 12:00:12.903234 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
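The containerd error above about "no network config found in /etc/cni/net.d" is expected at this point in boot: the CRI plugin starts without a pod network and re-reads the directory once a configuration appears. A minimal sketch of a bridge conflist that would satisfy it, assuming the standard bridge, host-local and portmap plugins are installed under /opt/cni/bin (the binDirs value in the cri config dump above) and that 10.88.0.0/16 is unused:

    cat <<'EOF' > /etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

On a kubeadm-managed node this file is normally installed by the chosen CNI add-on rather than written by hand.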
Jan 19 12:00:12.908576 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 19 12:00:12.988110 containerd[1598]: time="2026-01-19T12:00:12.987742864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 19 12:00:12.988110 containerd[1598]: time="2026-01-19T12:00:12.987867910Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 19 12:00:12.988110 containerd[1598]: time="2026-01-19T12:00:12.987744464Z" level=info msg="Start subscribing containerd event" Jan 19 12:00:12.988110 containerd[1598]: time="2026-01-19T12:00:12.987990052Z" level=info msg="Start recovering state" Jan 19 12:00:12.988712 containerd[1598]: time="2026-01-19T12:00:12.988491108Z" level=info msg="Start event monitor" Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.988761953Z" level=info msg="Start cni network conf syncer for default" Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.988928465Z" level=info msg="Start streaming server" Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.988939376Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.989122038Z" level=info msg="runtime interface starting up..." Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.989128969Z" level=info msg="starting plugins..." Jan 19 12:00:12.989155 containerd[1598]: time="2026-01-19T12:00:12.989144877Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 19 12:00:12.989311 containerd[1598]: time="2026-01-19T12:00:12.989284711Z" level=info msg="containerd successfully booted in 0.192790s" Jan 19 12:00:12.989567 systemd[1]: Started containerd.service - containerd container runtime. Jan 19 12:00:13.028627 tar[1592]: linux-amd64/README.md Jan 19 12:00:13.059733 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 19 12:00:13.990645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:14.000817 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 19 12:00:14.001439 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:00:14.009971 systemd[1]: Startup finished in 10.362s (kernel) + 21.170s (initrd) + 11.600s (userspace) = 43.133s. Jan 19 12:00:14.742910 kubelet[1702]: E0119 12:00:14.742468 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:00:14.746806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:00:14.747238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:00:14.747855 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 269.5M memory peak. Jan 19 12:00:21.727995 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 19 12:00:21.730659 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:36136.service - OpenSSH per-connection server daemon (10.0.0.1:36136). 
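The kubelet exit above is the normal state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is generated by kubeadm during init or join and is not shipped in the image, so the unit fails and systemd keeps retrying until the file exists. A short sketch for confirming that this is the only problem, using standard systemd and kubeadm commands (the join arguments are placeholders):

    # Inspect the failing unit and its most recent output.
    systemctl status kubelet
    journalctl -u kubelet -n 20 --no-pager

    # The config file appears only after the node is initialized or joined:
    #   kubeadm init ...                                  # control-plane node
    #   kubeadm join <api-server>:6443 --token <token> \
    #     --discovery-token-ca-cert-hash <hash>           # worker node
    ls -l /var/lib/kubelet/config.yaml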
Jan 19 12:00:21.883945 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 36136 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:21.888797 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:21.903567 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 19 12:00:21.905638 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 19 12:00:21.915604 systemd-logind[1580]: New session 1 of user core. Jan 19 12:00:21.947598 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 19 12:00:21.951159 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 19 12:00:21.974452 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:21.980502 systemd-logind[1580]: New session 2 of user core. Jan 19 12:00:22.145953 systemd[1722]: Queued start job for default target default.target. Jan 19 12:00:22.170679 systemd[1722]: Created slice app.slice - User Application Slice. Jan 19 12:00:22.170788 systemd[1722]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 19 12:00:22.170810 systemd[1722]: Reached target paths.target - Paths. Jan 19 12:00:22.171156 systemd[1722]: Reached target timers.target - Timers. Jan 19 12:00:22.173788 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 19 12:00:22.175494 systemd[1722]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 19 12:00:22.198255 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 19 12:00:22.198792 systemd[1722]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 19 12:00:22.199158 systemd[1722]: Reached target sockets.target - Sockets. Jan 19 12:00:22.199363 systemd[1722]: Reached target basic.target - Basic System. Jan 19 12:00:22.199549 systemd[1722]: Reached target default.target - Main User Target. Jan 19 12:00:22.199668 systemd[1722]: Startup finished in 206ms. Jan 19 12:00:22.199688 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 19 12:00:22.215811 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 19 12:00:22.242281 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:35210.service - OpenSSH per-connection server daemon (10.0.0.1:35210). Jan 19 12:00:22.325858 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 35210 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:22.328001 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:22.336782 systemd-logind[1580]: New session 3 of user core. Jan 19 12:00:22.347331 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 19 12:00:22.368491 sshd[1740]: Connection closed by 10.0.0.1 port 35210 Jan 19 12:00:22.368952 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jan 19 12:00:22.391399 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:35210.service: Deactivated successfully. Jan 19 12:00:22.393727 systemd[1]: session-3.scope: Deactivated successfully. Jan 19 12:00:22.395483 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jan 19 12:00:22.399434 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:35218.service - OpenSSH per-connection server daemon (10.0.0.1:35218). Jan 19 12:00:22.400897 systemd-logind[1580]: Removed session 3. 
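Each SSH login above yields a logind session scope, and the first login additionally starts a per-user systemd instance (user@500.service) that brings up its own default.target, which is what the systemd[1722] lines describe. A quick way to inspect that state with standard loginctl/systemctl commands, using the core user (UID 500) from this log:

    # Active logind sessions and a per-user summary.
    loginctl list-sessions
    loginctl user-status core

    # The user manager itself runs as a system unit.
    systemctl status user@500.service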
Jan 19 12:00:22.494868 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 35218 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:22.496849 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:22.505367 systemd-logind[1580]: New session 4 of user core. Jan 19 12:00:22.519390 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 19 12:00:22.536410 sshd[1751]: Connection closed by 10.0.0.1 port 35218 Jan 19 12:00:22.536838 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Jan 19 12:00:22.549666 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:35218.service: Deactivated successfully. Jan 19 12:00:22.552934 systemd[1]: session-4.scope: Deactivated successfully. Jan 19 12:00:22.554905 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jan 19 12:00:22.560238 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:35234.service - OpenSSH per-connection server daemon (10.0.0.1:35234). Jan 19 12:00:22.561209 systemd-logind[1580]: Removed session 4. Jan 19 12:00:22.642552 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 35234 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:22.644924 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:22.654234 systemd-logind[1580]: New session 5 of user core. Jan 19 12:00:22.668556 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 19 12:00:22.695318 sshd[1762]: Connection closed by 10.0.0.1 port 35234 Jan 19 12:00:22.695913 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jan 19 12:00:22.709453 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:35234.service: Deactivated successfully. Jan 19 12:00:22.712680 systemd[1]: session-5.scope: Deactivated successfully. Jan 19 12:00:22.714738 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jan 19 12:00:22.720337 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:35242.service - OpenSSH per-connection server daemon (10.0.0.1:35242). Jan 19 12:00:22.721757 systemd-logind[1580]: Removed session 5. Jan 19 12:00:22.802178 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 35242 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:22.804530 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:22.814719 systemd-logind[1580]: New session 6 of user core. Jan 19 12:00:22.830398 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 19 12:00:22.865306 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 19 12:00:22.865782 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:00:22.889812 sudo[1773]: pam_unix(sudo:session): session closed for user root Jan 19 12:00:22.892407 sshd[1772]: Connection closed by 10.0.0.1 port 35242 Jan 19 12:00:22.892900 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jan 19 12:00:22.908863 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:35242.service: Deactivated successfully. Jan 19 12:00:22.911711 systemd[1]: session-6.scope: Deactivated successfully. Jan 19 12:00:22.913359 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jan 19 12:00:22.918396 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:35244.service - OpenSSH per-connection server daemon (10.0.0.1:35244). 
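The first sudo command above (setenforce 1) flips SELinux into enforcing mode at runtime only; it does not persist across reboots, which on Flatcar is determined by the policy and configuration shipped in the /usr image. A short sketch for checking the result, assuming the usual SELinux userland (getenforce, sestatus) and audit tools are present:

    # Current mode and a fuller policy summary.
    getenforce
    sestatus

    # Any recent AVC denials recorded by auditd.
    ausearch -m avc -ts recent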
Jan 19 12:00:22.919751 systemd-logind[1580]: Removed session 6. Jan 19 12:00:23.000115 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 35244 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:23.002602 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:23.011752 systemd-logind[1580]: New session 7 of user core. Jan 19 12:00:23.031364 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 19 12:00:23.061404 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 19 12:00:23.062212 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:00:23.069373 sudo[1786]: pam_unix(sudo:session): session closed for user root Jan 19 12:00:23.084650 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 19 12:00:23.085227 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:00:23.098799 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 19 12:00:23.179000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 12:00:23.181359 augenrules[1810]: No rules Jan 19 12:00:23.182742 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 12:00:23.183562 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 12:00:23.185519 sudo[1785]: pam_unix(sudo:session): session closed for user root Jan 19 12:00:23.186319 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 19 12:00:23.186385 kernel: audit: type=1305 audit(1768824023.179:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 12:00:23.187994 sshd[1784]: Connection closed by 10.0.0.1 port 35244 Jan 19 12:00:23.190968 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Jan 19 12:00:23.179000 audit[1810]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3275e570 a2=420 a3=0 items=0 ppid=1791 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:23.219200 kernel: audit: type=1300 audit(1768824023.179:228): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe3275e570 a2=420 a3=0 items=0 ppid=1791 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:23.219268 kernel: audit: type=1327 audit(1768824023.179:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:00:23.179000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 12:00:23.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.241242 kernel: audit: type=1130 audit(1768824023.183:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:23.241347 kernel: audit: type=1131 audit(1768824023.183:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.184000 audit[1785]: USER_END pid=1785 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.268534 kernel: audit: type=1106 audit(1768824023.184:231): pid=1785 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.268636 kernel: audit: type=1104 audit(1768824023.184:232): pid=1785 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.184000 audit[1785]: CRED_DISP pid=1785 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.192000 audit[1780]: USER_END pid=1780 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.305472 kernel: audit: type=1106 audit(1768824023.192:233): pid=1780 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.305560 kernel: audit: type=1104 audit(1768824023.192:234): pid=1780 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.192000 audit[1780]: CRED_DISP pid=1780 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.333967 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:35244.service: Deactivated successfully. Jan 19 12:00:23.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.26:22-10.0.0.1:35244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.337439 systemd[1]: session-7.scope: Deactivated successfully. Jan 19 12:00:23.339245 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Jan 19 12:00:23.342588 systemd-logind[1580]: Removed session 7. 
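The audit-rules sequence above (the two rule files under /etc/audit/rules.d/ are removed, then audit-rules is restarted) leaves the kernel with an empty rule set, which is why augenrules reports "No rules" and the kernel records an op=remove_rule configuration change. A sketch for inspecting and reloading rules with the standard auditd tooling:

    # Rules currently loaded in the kernel (empty after the restart above).
    auditctl -l

    # Rule fragments live in /etc/audit/rules.d/ and are compiled by augenrules.
    ls /etc/audit/rules.d/
    augenrules --check   # does the compiled rule file match the fragments?
    augenrules --load    # recompile and load into the kernel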
Jan 19 12:00:23.344673 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:35256.service - OpenSSH per-connection server daemon (10.0.0.1:35256). Jan 19 12:00:23.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.26:22-10.0.0.1:35256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.356179 kernel: audit: type=1131 audit(1768824023.333:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.26:22-10.0.0.1:35244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.420000 audit[1819]: USER_ACCT pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.422442 sshd[1819]: Accepted publickey for core from 10.0.0.1 port 35256 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:00:23.422000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.422000 audit[1819]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd74fb5800 a2=3 a3=0 items=0 ppid=1 pid=1819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:23.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:00:23.424891 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:00:23.434499 systemd-logind[1580]: New session 8 of user core. Jan 19 12:00:23.448384 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 19 12:00:23.452000 audit[1819]: USER_START pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.455000 audit[1823]: CRED_ACQ pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:00:23.473000 audit[1824]: USER_ACCT pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:23.475380 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 19 12:00:23.474000 audit[1824]: CRED_REFR pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 19 12:00:23.475910 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 12:00:23.474000 audit[1824]: USER_START pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:00:24.040183 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 19 12:00:24.066974 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 19 12:00:24.477285 dockerd[1845]: time="2026-01-19T12:00:24.476823836Z" level=info msg="Starting up" Jan 19 12:00:24.479527 dockerd[1845]: time="2026-01-19T12:00:24.479425558Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 19 12:00:24.508251 dockerd[1845]: time="2026-01-19T12:00:24.508157531Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 19 12:00:24.628243 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport507635976-merged.mount: Deactivated successfully. Jan 19 12:00:24.732628 dockerd[1845]: time="2026-01-19T12:00:24.732387524Z" level=info msg="Loading containers: start." Jan 19 12:00:24.750214 kernel: Initializing XFRM netlink socket Jan 19 12:00:24.895000 audit[1898]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.895000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffddfce68e0 a2=0 a3=0 items=0 ppid=1845 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 12:00:24.903000 audit[1900]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.903000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe4b62c6e0 a2=0 a3=0 items=0 ppid=1845 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.903000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 12:00:24.910000 audit[1902]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.910000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea971f3a0 a2=0 a3=0 items=0 ppid=1845 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 12:00:24.918000 audit[1904]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 19 12:00:24.918000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1307af00 a2=0 a3=0 items=0 ppid=1845 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 12:00:24.925000 audit[1906]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.925000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea83ace70 a2=0 a3=0 items=0 ppid=1845 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 12:00:24.931000 audit[1908]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.931000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe55f968a0 a2=0 a3=0 items=0 ppid=1845 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:00:24.940000 audit[1910]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.940000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb95a1160 a2=0 a3=0 items=0 ppid=1845 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:00:24.991184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 19 12:00:24.994710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 19 12:00:24.947000 audit[1912]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:24.947000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffeafa7bd80 a2=0 a3=0 items=0 ppid=1845 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:24.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 12:00:25.006000 audit[1916]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.006000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffeb153f240 a2=0 a3=0 items=0 ppid=1845 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 19 12:00:25.013000 audit[1918]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.013000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd9c5f7c30 a2=0 a3=0 items=0 ppid=1845 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 12:00:25.021000 audit[1922]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.021000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc9ecf3d20 a2=0 a3=0 items=0 ppid=1845 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.021000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 12:00:25.027000 audit[1924]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.027000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc06254510 a2=0 a3=0 items=0 ppid=1845 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:00:25.033000 audit[1926]: 
NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.033000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd0704b210 a2=0 a3=0 items=0 ppid=1845 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 12:00:25.152000 audit[1956]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.152000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe7636fe80 a2=0 a3=0 items=0 ppid=1845 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 12:00:25.158000 audit[1958]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.158000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd9c6343e0 a2=0 a3=0 items=0 ppid=1845 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 12:00:25.165000 audit[1960]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.165000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff02d6a390 a2=0 a3=0 items=0 ppid=1845 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 12:00:25.172000 audit[1962]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.172000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcb760de0 a2=0 a3=0 items=0 ppid=1845 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 12:00:25.180000 audit[1964]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.180000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd904b0a40 a2=0 a3=0 
items=0 ppid=1845 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 12:00:25.187000 audit[1966]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.187000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffedb49fb00 a2=0 a3=0 items=0 ppid=1845 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:00:25.193000 audit[1968]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.193000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd5200f00 a2=0 a3=0 items=0 ppid=1845 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:00:25.201000 audit[1972]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.201000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdb4459e30 a2=0 a3=0 items=0 ppid=1845 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 12:00:25.210000 audit[1974]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.210000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcb8e5ac40 a2=0 a3=0 items=0 ppid=1845 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.210000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 19 12:00:25.218000 audit[1978]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.218000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd19e013d0 a2=0 a3=0 items=0 ppid=1845 pid=1978 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 12:00:25.228000 audit[1980]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.228000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff915ddc40 a2=0 a3=0 items=0 ppid=1845 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 12:00:25.234776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:25.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:25.236000 audit[1982]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.236000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffff09a21a0 a2=0 a3=0 items=0 ppid=1845 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 12:00:25.242000 audit[1986]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.242000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd7101cf40 a2=0 a3=0 items=0 ppid=1845 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 12:00:25.250557 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:00:25.263000 audit[1996]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.263000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe41589660 a2=0 a3=0 items=0 ppid=1845 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.263000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 12:00:25.272000 audit[1998]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.272000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcbca6b330 a2=0 a3=0 items=0 ppid=1845 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 12:00:25.281000 audit[2001]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.281000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4c874850 a2=0 a3=0 items=0 ppid=1845 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 12:00:25.291000 audit[2003]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.291000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe493dce00 a2=0 a3=0 items=0 ppid=1845 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 12:00:25.298000 audit[2005]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.298000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff132dbca0 a2=0 a3=0 items=0 ppid=1845 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 12:00:25.308000 audit[2007]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:25.308000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8800d8e0 a2=0 a3=0 items=0 ppid=1845 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 12:00:25.385720 kubelet[1984]: E0119 12:00:25.385416 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: 
/var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:00:25.395286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:00:25.395602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:00:25.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:00:25.397847 systemd[1]: kubelet.service: Consumed 326ms CPU time, 109.3M memory peak. Jan 19 12:00:25.397000 audit[2013]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.397000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc99e3d4f0 a2=0 a3=0 items=0 ppid=1845 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.397000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 19 12:00:25.404000 audit[2016]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.404000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff9c91eeb0 a2=0 a3=0 items=0 ppid=1845 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 19 12:00:25.434000 audit[2024]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.434000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffec7065e90 a2=0 a3=0 items=0 ppid=1845 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 19 12:00:25.461000 audit[2030]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.461000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdfe703b00 a2=0 a3=0 items=0 ppid=1845 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.461000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 19 12:00:25.469000 audit[2032]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.469000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc0bcfc9e0 a2=0 a3=0 items=0 ppid=1845 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.469000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 19 12:00:25.477000 audit[2034]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.477000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd9bfa1610 a2=0 a3=0 items=0 ppid=1845 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 19 12:00:25.485000 audit[2036]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.485000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff05119380 a2=0 a3=0 items=0 ppid=1845 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 12:00:25.492000 audit[2038]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:25.492000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe3a01bf90 a2=0 a3=0 items=0 ppid=1845 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:25.492000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 19 12:00:25.495913 systemd-networkd[1513]: docker0: Link UP Jan 19 12:00:25.503760 dockerd[1845]: time="2026-01-19T12:00:25.503594796Z" level=info msg="Loading containers: done." 
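The audit PROCTITLE records above carry the full command line of the process that touched the netfilter tables, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for turning one of those hex strings back into the original argv; the example value is copied from the DOCKER-USER chain-creation record above:

    # Audit PROCTITLE values are the process argv, hex-encoded, with NUL bytes
    # separating the individual arguments.
    def decode_proctitle(hex_str: str) -> list[str]:
        raw = bytes.fromhex(hex_str)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Value copied verbatim from the NETFILTER_CFG/PROCTITLE record above.
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D740066696C746572002D4E00444F434B45522D55534552"))
    # -> ['/usr/bin/iptables', '--wait', '-t', 'filter', '-N', 'DOCKER-USER']

The same decoder applies to every other proctitle= field in this log, which is how the DOCKER-FORWARD, DOCKER-BRIDGE and DOCKER-ISOLATION rule insertions above can be read back as ordinary iptables/ip6tables invocations.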
Jan 19 12:00:25.542132 dockerd[1845]: time="2026-01-19T12:00:25.541919001Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 19 12:00:25.542359 dockerd[1845]: time="2026-01-19T12:00:25.542233151Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 19 12:00:25.542578 dockerd[1845]: time="2026-01-19T12:00:25.542317109Z" level=info msg="Initializing buildkit" Jan 19 12:00:25.605667 dockerd[1845]: time="2026-01-19T12:00:25.605552065Z" level=info msg="Completed buildkit initialization" Jan 19 12:00:25.614898 dockerd[1845]: time="2026-01-19T12:00:25.614813198Z" level=info msg="Daemon has completed initialization" Jan 19 12:00:25.615311 dockerd[1845]: time="2026-01-19T12:00:25.615013300Z" level=info msg="API listen on /run/docker.sock" Jan 19 12:00:25.616298 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 19 12:00:25.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:25.620651 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3589189031-merged.mount: Deactivated successfully. Jan 19 12:00:26.677852 containerd[1598]: time="2026-01-19T12:00:26.677398146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 19 12:00:27.538894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368661213.mount: Deactivated successfully. Jan 19 12:00:29.139813 containerd[1598]: time="2026-01-19T12:00:29.139662867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:29.141811 containerd[1598]: time="2026-01-19T12:00:29.141659133Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 19 12:00:29.145129 containerd[1598]: time="2026-01-19T12:00:29.144898285Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:29.152302 containerd[1598]: time="2026-01-19T12:00:29.151916817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:29.152938 containerd[1598]: time="2026-01-19T12:00:29.152638464Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.475136s" Jan 19 12:00:29.152938 containerd[1598]: time="2026-01-19T12:00:29.152794103Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 19 12:00:29.155147 containerd[1598]: time="2026-01-19T12:00:29.154915845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 19 12:00:31.087698 
containerd[1598]: time="2026-01-19T12:00:31.087580197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:31.089178 containerd[1598]: time="2026-01-19T12:00:31.088838553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 19 12:00:31.090582 containerd[1598]: time="2026-01-19T12:00:31.090391669Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:31.094339 containerd[1598]: time="2026-01-19T12:00:31.094182524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:31.095593 containerd[1598]: time="2026-01-19T12:00:31.095447265Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.940428397s" Jan 19 12:00:31.095593 containerd[1598]: time="2026-01-19T12:00:31.095539617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 19 12:00:31.096912 containerd[1598]: time="2026-01-19T12:00:31.096499795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 19 12:00:32.926672 containerd[1598]: time="2026-01-19T12:00:32.926274122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:32.928703 containerd[1598]: time="2026-01-19T12:00:32.928415201Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 19 12:00:32.931271 containerd[1598]: time="2026-01-19T12:00:32.930877939Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:32.934817 containerd[1598]: time="2026-01-19T12:00:32.934620557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:32.935781 containerd[1598]: time="2026-01-19T12:00:32.935594102Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.839008458s" Jan 19 12:00:32.935781 containerd[1598]: time="2026-01-19T12:00:32.935699921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 19 12:00:32.936390 containerd[1598]: 
time="2026-01-19T12:00:32.936363906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 19 12:00:34.208193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911608020.mount: Deactivated successfully. Jan 19 12:00:35.103901 containerd[1598]: time="2026-01-19T12:00:35.103645268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:35.105696 containerd[1598]: time="2026-01-19T12:00:35.105618235Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 19 12:00:35.108787 containerd[1598]: time="2026-01-19T12:00:35.108635193Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:35.111699 containerd[1598]: time="2026-01-19T12:00:35.111553105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:35.111858 containerd[1598]: time="2026-01-19T12:00:35.111781609Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.175170677s" Jan 19 12:00:35.111858 containerd[1598]: time="2026-01-19T12:00:35.111804681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 19 12:00:35.113717 containerd[1598]: time="2026-01-19T12:00:35.113384535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 19 12:00:35.490348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 19 12:00:35.493231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:00:35.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:35.806267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:35.827701 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 19 12:00:35.827795 kernel: audit: type=1130 audit(1768824035.805:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:35.848732 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 12:00:35.850610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856919844.mount: Deactivated successfully. 
Jan 19 12:00:35.967968 kubelet[2174]: E0119 12:00:35.967932 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 12:00:35.972651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 12:00:35.972839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 12:00:35.973848 systemd[1]: kubelet.service: Consumed 342ms CPU time, 108.5M memory peak. Jan 19 12:00:35.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:00:35.991360 kernel: audit: type=1131 audit(1768824035.972:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:00:37.367009 containerd[1598]: time="2026-01-19T12:00:37.366721710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:37.369555 containerd[1598]: time="2026-01-19T12:00:37.369432566Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20930825" Jan 19 12:00:37.373180 containerd[1598]: time="2026-01-19T12:00:37.372860214Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:37.378586 containerd[1598]: time="2026-01-19T12:00:37.378375291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:37.380221 containerd[1598]: time="2026-01-19T12:00:37.379858211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.266446644s" Jan 19 12:00:37.380221 containerd[1598]: time="2026-01-19T12:00:37.379886114Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 19 12:00:37.380824 containerd[1598]: time="2026-01-19T12:00:37.380744196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 19 12:00:37.830514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445650038.mount: Deactivated successfully. 
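The kubelet exits for the same reason as before: /var/lib/kubelet/config.yaml does not exist yet (that file is normally written by kubeadm during init/join), so systemd keeps restarting the unit on its failure timer. The spacing of the two failures can be read straight off the timestamps logged above:

    from datetime import datetime

    # Timestamps of the two "failed to load kubelet config file" errors above.
    fmt = "%b %d %H:%M:%S.%f"
    t1 = datetime.strptime("Jan 19 12:00:25.385720", fmt)
    t2 = datetime.strptime("Jan 19 12:00:35.967968", fmt)
    print(f"{(t2 - t1).total_seconds():.1f} s between failures")   # ~10.6 s

which is consistent with a roughly ten-second restart interval on the unit; the loop only ends once the config file is in place.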
Jan 19 12:00:37.844361 containerd[1598]: time="2026-01-19T12:00:37.844242775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:00:37.847227 containerd[1598]: time="2026-01-19T12:00:37.846816530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 19 12:00:37.849352 containerd[1598]: time="2026-01-19T12:00:37.849173810Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:00:37.853974 containerd[1598]: time="2026-01-19T12:00:37.853796290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 12:00:37.854711 containerd[1598]: time="2026-01-19T12:00:37.854536237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.697972ms" Jan 19 12:00:37.854711 containerd[1598]: time="2026-01-19T12:00:37.854640146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 19 12:00:37.855985 containerd[1598]: time="2026-01-19T12:00:37.855934916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 19 12:00:38.570619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116329481.mount: Deactivated successfully. 
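Summing the sizes reported next to the repo digests for the pulls that have completed so far (kube-apiserver through pause:3.10, with etcd still in flight below) gives a rough figure for how much image data the node has fetched:

    # Sizes copied from the "Pulled image ... size ..." records above.
    pulled = {
        "kube-apiserver:v1.33.7": 30111311,
        "kube-controller-manager:v1.33.7": 27673815,
        "kube-scheduler:v1.33.7": 21815154,
        "kube-proxy:v1.33.7": 31929115,
        "coredns:v1.12.0": 20939036,
        "pause:3.10": 320368,
    }
    print(f"{sum(pulled.values()) / 1e6:.1f} MB")   # ~132.8 MB pulled so far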
Jan 19 12:00:42.058952 containerd[1598]: time="2026-01-19T12:00:42.058677331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:42.061651 containerd[1598]: time="2026-01-19T12:00:42.061623664Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46231913" Jan 19 12:00:42.065682 containerd[1598]: time="2026-01-19T12:00:42.065413782Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:42.071370 containerd[1598]: time="2026-01-19T12:00:42.071233494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:00:42.072132 containerd[1598]: time="2026-01-19T12:00:42.071935561Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.215971969s" Jan 19 12:00:42.072353 containerd[1598]: time="2026-01-19T12:00:42.072207185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 19 12:00:45.524943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:45.525470 systemd[1]: kubelet.service: Consumed 342ms CPU time, 108.5M memory peak. Jan 19 12:00:45.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:45.528879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:00:45.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:45.545325 kernel: audit: type=1130 audit(1768824045.524:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:45.545398 kernel: audit: type=1131 audit(1768824045.524:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:45.571793 systemd[1]: Reload requested from client PID 2320 ('systemctl') (unit session-8.scope)... Jan 19 12:00:45.571880 systemd[1]: Reloading... Jan 19 12:00:45.687416 zram_generator::config[2365]: No configuration found. Jan 19 12:00:45.988176 systemd[1]: Reloading finished in 415 ms. 
Jan 19 12:00:46.030000 audit: BPF prog-id=63 op=LOAD Jan 19 12:00:46.030000 audit: BPF prog-id=49 op=UNLOAD Jan 19 12:00:46.030000 audit: BPF prog-id=64 op=LOAD Jan 19 12:00:46.030000 audit: BPF prog-id=65 op=LOAD Jan 19 12:00:46.030000 audit: BPF prog-id=50 op=UNLOAD Jan 19 12:00:46.030000 audit: BPF prog-id=51 op=UNLOAD Jan 19 12:00:46.033000 audit: BPF prog-id=66 op=LOAD Jan 19 12:00:46.033000 audit: BPF prog-id=46 op=UNLOAD Jan 19 12:00:46.033000 audit: BPF prog-id=67 op=LOAD Jan 19 12:00:46.033000 audit: BPF prog-id=68 op=LOAD Jan 19 12:00:46.033000 audit: BPF prog-id=47 op=UNLOAD Jan 19 12:00:46.033000 audit: BPF prog-id=48 op=UNLOAD Jan 19 12:00:46.034000 audit: BPF prog-id=69 op=LOAD Jan 19 12:00:46.034000 audit: BPF prog-id=70 op=LOAD Jan 19 12:00:46.034000 audit: BPF prog-id=43 op=UNLOAD Jan 19 12:00:46.034000 audit: BPF prog-id=44 op=UNLOAD Jan 19 12:00:46.036000 audit: BPF prog-id=71 op=LOAD Jan 19 12:00:46.036000 audit: BPF prog-id=59 op=UNLOAD Jan 19 12:00:46.038196 kernel: audit: type=1334 audit(1768824046.030:292): prog-id=63 op=LOAD Jan 19 12:00:46.038236 kernel: audit: type=1334 audit(1768824046.030:293): prog-id=49 op=UNLOAD Jan 19 12:00:46.038269 kernel: audit: type=1334 audit(1768824046.030:294): prog-id=64 op=LOAD Jan 19 12:00:46.038301 kernel: audit: type=1334 audit(1768824046.030:295): prog-id=65 op=LOAD Jan 19 12:00:46.038327 kernel: audit: type=1334 audit(1768824046.030:296): prog-id=50 op=UNLOAD Jan 19 12:00:46.038359 kernel: audit: type=1334 audit(1768824046.030:297): prog-id=51 op=UNLOAD Jan 19 12:00:46.038473 kernel: audit: type=1334 audit(1768824046.033:298): prog-id=66 op=LOAD Jan 19 12:00:46.038506 kernel: audit: type=1334 audit(1768824046.033:299): prog-id=46 op=UNLOAD Jan 19 12:00:46.038000 audit: BPF prog-id=72 op=LOAD Jan 19 12:00:46.038000 audit: BPF prog-id=58 op=UNLOAD Jan 19 12:00:46.039000 audit: BPF prog-id=73 op=LOAD Jan 19 12:00:46.039000 audit: BPF prog-id=55 op=UNLOAD Jan 19 12:00:46.039000 audit: BPF prog-id=74 op=LOAD Jan 19 12:00:46.039000 audit: BPF prog-id=75 op=LOAD Jan 19 12:00:46.039000 audit: BPF prog-id=56 op=UNLOAD Jan 19 12:00:46.039000 audit: BPF prog-id=57 op=UNLOAD Jan 19 12:00:46.040000 audit: BPF prog-id=76 op=LOAD Jan 19 12:00:46.040000 audit: BPF prog-id=45 op=UNLOAD Jan 19 12:00:46.044000 audit: BPF prog-id=77 op=LOAD Jan 19 12:00:46.044000 audit: BPF prog-id=60 op=UNLOAD Jan 19 12:00:46.044000 audit: BPF prog-id=78 op=LOAD Jan 19 12:00:46.044000 audit: BPF prog-id=79 op=LOAD Jan 19 12:00:46.044000 audit: BPF prog-id=61 op=UNLOAD Jan 19 12:00:46.044000 audit: BPF prog-id=62 op=UNLOAD Jan 19 12:00:46.045000 audit: BPF prog-id=80 op=LOAD Jan 19 12:00:46.046000 audit: BPF prog-id=52 op=UNLOAD Jan 19 12:00:46.046000 audit: BPF prog-id=81 op=LOAD Jan 19 12:00:46.046000 audit: BPF prog-id=82 op=LOAD Jan 19 12:00:46.046000 audit: BPF prog-id=53 op=UNLOAD Jan 19 12:00:46.046000 audit: BPF prog-id=54 op=UNLOAD Jan 19 12:00:46.093262 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 19 12:00:46.093528 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 19 12:00:46.094229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:46.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 12:00:46.096699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
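The burst of BPF prog-id LOAD/UNLOAD records is consistent with systemd swapping out its per-unit BPF programs during the reload above, and each swap is audited. Where kauditd prints only the numeric record type, the types seen in this stretch of the log map back to names as follows (a partial mapping inferred from the paired entries above; the authoritative list is in the kernel's audit headers):

    # Numeric audit record types seen alongside their symbolic names in this log.
    AUDIT_TYPES = {
        1130: "SERVICE_START",   # e.g. "type=1130 ... unit=kubelet ... res=success"
        1131: "SERVICE_STOP",    # e.g. "type=1131 ... unit=kubelet ... res=failed"
        1334: "BPF",             # the prog-id LOAD/UNLOAD records above
    }
    print(AUDIT_TYPES[1334])   # BPF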
Jan 19 12:00:46.397209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:46.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:46.412414 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 12:00:46.557569 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:00:46.557569 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 12:00:46.557569 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:00:46.558221 kubelet[2411]: I0119 12:00:46.557566 2411 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 12:00:47.604803 kubelet[2411]: I0119 12:00:47.604669 2411 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 19 12:00:47.604803 kubelet[2411]: I0119 12:00:47.604781 2411 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 12:00:47.605746 kubelet[2411]: I0119 12:00:47.605196 2411 server.go:956] "Client rotation is on, will bootstrap in background" Jan 19 12:00:47.652322 kubelet[2411]: E0119 12:00:47.652178 2411 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 19 12:00:47.653477 kubelet[2411]: I0119 12:00:47.653328 2411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 12:00:47.670938 kubelet[2411]: I0119 12:00:47.670800 2411 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 12:00:47.680744 kubelet[2411]: I0119 12:00:47.680626 2411 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 19 12:00:47.681527 kubelet[2411]: I0119 12:00:47.680980 2411 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 12:00:47.681527 kubelet[2411]: I0119 12:00:47.681251 2411 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 12:00:47.681527 kubelet[2411]: I0119 12:00:47.681489 2411 topology_manager.go:138] "Creating topology manager with none policy" Jan 19 12:00:47.681527 kubelet[2411]: I0119 12:00:47.681499 2411 container_manager_linux.go:303] "Creating device plugin manager" Jan 19 12:00:47.681934 kubelet[2411]: I0119 12:00:47.681626 2411 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:00:47.686751 kubelet[2411]: I0119 12:00:47.686563 2411 kubelet.go:480] "Attempting to sync node with API server" Jan 19 12:00:47.686751 kubelet[2411]: I0119 12:00:47.686647 2411 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 12:00:47.686751 kubelet[2411]: I0119 12:00:47.686668 2411 kubelet.go:386] "Adding apiserver pod source" Jan 19 12:00:47.686751 kubelet[2411]: I0119 12:00:47.686682 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 12:00:47.694341 kubelet[2411]: E0119 12:00:47.694239 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 19 12:00:47.694689 kubelet[2411]: E0119 12:00:47.694571 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 19 12:00:47.695518 
kubelet[2411]: I0119 12:00:47.695346 2411 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 12:00:47.696605 kubelet[2411]: I0119 12:00:47.696354 2411 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 19 12:00:47.698557 kubelet[2411]: W0119 12:00:47.698377 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 19 12:00:47.706196 kubelet[2411]: I0119 12:00:47.705556 2411 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 19 12:00:47.706196 kubelet[2411]: I0119 12:00:47.705603 2411 server.go:1289] "Started kubelet" Jan 19 12:00:47.710804 kubelet[2411]: I0119 12:00:47.710788 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 12:00:47.712222 kubelet[2411]: I0119 12:00:47.712194 2411 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 12:00:47.714259 kubelet[2411]: I0119 12:00:47.714245 2411 server.go:317] "Adding debug handlers to kubelet server" Jan 19 12:00:47.715646 kubelet[2411]: I0119 12:00:47.714868 2411 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 19 12:00:47.715646 kubelet[2411]: I0119 12:00:47.715267 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 12:00:47.717331 kubelet[2411]: E0119 12:00:47.714191 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c201989a8cf5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-19 12:00:47.705575263 +0000 UTC m=+1.280107690,LastTimestamp:2026-01-19 12:00:47.705575263 +0000 UTC m=+1.280107690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 19 12:00:47.717633 kubelet[2411]: E0119 12:00:47.717368 2411 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 19 12:00:47.718956 kubelet[2411]: E0119 12:00:47.718763 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 19 12:00:47.720263 kubelet[2411]: I0119 12:00:47.719810 2411 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 19 12:00:47.720263 kubelet[2411]: I0119 12:00:47.719968 2411 reconciler.go:26] "Reconciler: start to sync state" Jan 19 12:00:47.720398 kubelet[2411]: I0119 12:00:47.720350 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 12:00:47.721221 kubelet[2411]: I0119 12:00:47.720752 2411 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 12:00:47.721751 kubelet[2411]: I0119 12:00:47.721522 2411 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 12:00:47.722357 kubelet[2411]: E0119 12:00:47.722216 2411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 19 12:00:47.724730 kubelet[2411]: E0119 12:00:47.724298 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Jan 19 12:00:47.724982 kubelet[2411]: I0119 12:00:47.724815 2411 factory.go:223] Registration of the containerd container factory successfully Jan 19 12:00:47.724982 kubelet[2411]: I0119 12:00:47.724918 2411 factory.go:223] Registration of the systemd container factory successfully Jan 19 12:00:47.745000 audit[2431]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.745000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1f229f30 a2=0 a3=0 items=0 ppid=2411 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 12:00:47.752000 audit[2432]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.752000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd3fbc680 a2=0 a3=0 items=0 ppid=2411 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 12:00:47.763000 audit[2435]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain 
pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.763000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff552537f0 a2=0 a3=0 items=0 ppid=2411 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:00:47.772000 audit[2438]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.772000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc98ed93b0 a2=0 a3=0 items=0 ppid=2411 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:00:47.774705 kubelet[2411]: I0119 12:00:47.774304 2411 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 12:00:47.774705 kubelet[2411]: I0119 12:00:47.774325 2411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 12:00:47.774705 kubelet[2411]: I0119 12:00:47.774341 2411 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:00:47.793556 kubelet[2411]: I0119 12:00:47.793279 2411 policy_none.go:49] "None policy: Start" Jan 19 12:00:47.793556 kubelet[2411]: I0119 12:00:47.793398 2411 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 19 12:00:47.793556 kubelet[2411]: I0119 12:00:47.793418 2411 state_mem.go:35] "Initializing new in-memory state store" Jan 19 12:00:47.814371 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 19 12:00:47.814000 audit[2442]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.814000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff13d18b30 a2=0 a3=0 items=0 ppid=2411 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 19 12:00:47.816999 kubelet[2411]: I0119 12:00:47.816821 2411 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 19 12:00:47.820000 audit[2444]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.820000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb6d5cb80 a2=0 a3=0 items=0 ppid=2411 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 12:00:47.822672 kubelet[2411]: E0119 12:00:47.822393 2411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 19 12:00:47.825000 audit[2443]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:47.825000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd617e66c0 a2=0 a3=0 items=0 ppid=2411 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 12:00:47.827175 kubelet[2411]: I0119 12:00:47.826549 2411 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 19 12:00:47.827175 kubelet[2411]: I0119 12:00:47.826587 2411 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 19 12:00:47.827175 kubelet[2411]: I0119 12:00:47.826606 2411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 19 12:00:47.827175 kubelet[2411]: I0119 12:00:47.826614 2411 kubelet.go:2436] "Starting kubelet main sync loop" Jan 19 12:00:47.827175 kubelet[2411]: E0119 12:00:47.826653 2411 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 12:00:47.831460 kubelet[2411]: E0119 12:00:47.831271 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 19 12:00:47.832000 audit[2445]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.832000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff49c53920 a2=0 a3=0 items=0 ppid=2411 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 12:00:47.833000 audit[2446]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:47.833000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9dde9f40 a2=0 a3=0 items=0 ppid=2411 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 12:00:47.837593 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
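Every reflector error in this stretch fails the same way: the kubelet cannot reach https://10.0.0.26:6443 because the API server it is about to launch as a static pod (see the RunPodSandbox records further down) is not running yet, so each informer's initial list gets connection refused. A quick way to see which resource types are affected is to pull the type= field out of those lines:

    import re

    # "Failed to watch" lines as logged above, trimmed to the relevant fields.
    errors = [
        'reflector.go:200] "Failed to watch" err="... connection refused" type="*v1.Node"',
        'reflector.go:200] "Failed to watch" err="... connection refused" type="*v1.Service"',
        'reflector.go:200] "Failed to watch" err="... connection refused" type="*v1.CSIDriver"',
        'reflector.go:200] "Failed to watch" err="... connection refused" type="*v1.RuntimeClass"',
    ]
    types = [re.search(r'type="(\*v1\.\w+)"', line).group(1) for line in errors]
    print(types)   # ['*v1.Node', '*v1.Service', '*v1.CSIDriver', '*v1.RuntimeClass']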
Jan 19 12:00:47.839000 audit[2447]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:00:47.839000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef0442120 a2=0 a3=0 items=0 ppid=2411 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 12:00:47.840000 audit[2448]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:47.840000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc110cb7e0 a2=0 a3=0 items=0 ppid=2411 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.840000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 12:00:47.844966 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 19 12:00:47.845000 audit[2449]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:00:47.845000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2b464060 a2=0 a3=0 items=0 ppid=2411 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:47.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 12:00:47.861321 kubelet[2411]: E0119 12:00:47.860814 2411 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 19 12:00:47.864661 kubelet[2411]: I0119 12:00:47.864455 2411 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 12:00:47.864730 kubelet[2411]: I0119 12:00:47.864695 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 12:00:47.865481 kubelet[2411]: I0119 12:00:47.865223 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 12:00:47.868275 kubelet[2411]: E0119 12:00:47.867702 2411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 19 12:00:47.868275 kubelet[2411]: E0119 12:00:47.868271 2411 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 19 12:00:47.927461 kubelet[2411]: E0119 12:00:47.927311 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Jan 19 12:00:47.955627 systemd[1]: Created slice kubepods-burstable-podbce0eeeaf069ff8ea3056a00eb2f072d.slice - libcontainer container kubepods-burstable-podbce0eeeaf069ff8ea3056a00eb2f072d.slice. Jan 19 12:00:47.969428 kubelet[2411]: I0119 12:00:47.969188 2411 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:00:47.969930 kubelet[2411]: E0119 12:00:47.969840 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 19 12:00:47.985993 kubelet[2411]: E0119 12:00:47.985794 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:47.986967 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 19 12:00:47.992826 kubelet[2411]: E0119 12:00:47.992659 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:47.995501 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 19 12:00:47.999165 kubelet[2411]: E0119 12:00:47.998989 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:48.024466 kubelet[2411]: I0119 12:00:48.024329 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:48.024466 kubelet[2411]: I0119 12:00:48.024365 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:48.024466 kubelet[2411]: I0119 12:00:48.024391 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:48.024466 kubelet[2411]: I0119 12:00:48.024410 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:48.024466 kubelet[2411]: I0119 12:00:48.024424 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:48.025102 kubelet[2411]: I0119 12:00:48.024901 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:48.025102 kubelet[2411]: I0119 12:00:48.025004 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:48.025175 kubelet[2411]: I0119 12:00:48.025159 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:48.025228 kubelet[2411]: I0119 12:00:48.025175 2411 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:48.174487 kubelet[2411]: I0119 12:00:48.173480 2411 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:00:48.174487 kubelet[2411]: E0119 12:00:48.173964 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 19 12:00:48.288401 kubelet[2411]: E0119 12:00:48.288252 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.289924 containerd[1598]: time="2026-01-19T12:00:48.289828771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bce0eeeaf069ff8ea3056a00eb2f072d,Namespace:kube-system,Attempt:0,}" Jan 19 12:00:48.294265 kubelet[2411]: E0119 12:00:48.294198 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.295375 containerd[1598]: time="2026-01-19T12:00:48.295323166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 19 12:00:48.300999 kubelet[2411]: E0119 12:00:48.300861 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.302128 containerd[1598]: time="2026-01-19T12:00:48.301725370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 19 12:00:48.328281 kubelet[2411]: E0119 12:00:48.328246 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Jan 19 12:00:48.356459 containerd[1598]: time="2026-01-19T12:00:48.356152153Z" level=info msg="connecting to shim 77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c" address="unix:///run/containerd/s/dd060c7cc2f1a8321a9d44c1b4f319a1b14e23991aa81f96363e76c39cd16fad" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:00:48.388176 containerd[1598]: time="2026-01-19T12:00:48.387970093Z" level=info msg="connecting to shim cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b" address="unix:///run/containerd/s/7966d6d612c479438d20c2181f18f9ee2e3f1752b9e5e629ab84b27444d4da07" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:00:48.405556 containerd[1598]: time="2026-01-19T12:00:48.404955256Z" level=info msg="connecting to shim 41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3" address="unix:///run/containerd/s/4353562124e93b1cdda941a0daf090c8efeb26fbe2336a88477bcdeb667d3aa8" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:00:48.447391 systemd[1]: Started cri-containerd-77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c.scope - libcontainer container 
77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c. Jan 19 12:00:48.467450 systemd[1]: Started cri-containerd-cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b.scope - libcontainer container cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b. Jan 19 12:00:48.484415 systemd[1]: Started cri-containerd-41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3.scope - libcontainer container 41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3. Jan 19 12:00:48.490000 audit: BPF prog-id=83 op=LOAD Jan 19 12:00:48.491000 audit: BPF prog-id=84 op=LOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=84 op=UNLOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=85 op=LOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=86 op=LOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=86 op=UNLOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=85 op=UNLOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.491000 audit: BPF prog-id=87 op=LOAD Jan 19 12:00:48.491000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2458 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737663265623738643131363564663936663334663739333930366634 Jan 19 12:00:48.499000 audit: BPF prog-id=88 op=LOAD Jan 19 12:00:48.499000 audit: BPF prog-id=89 op=LOAD Jan 19 12:00:48.499000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.499000 audit: BPF prog-id=89 op=UNLOAD Jan 19 12:00:48.499000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.499000 audit: BPF prog-id=90 op=LOAD Jan 19 12:00:48.499000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 
12:00:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.500000 audit: BPF prog-id=91 op=LOAD Jan 19 12:00:48.500000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.500000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.501000 audit: BPF prog-id=91 op=UNLOAD Jan 19 12:00:48.501000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.502000 audit: BPF prog-id=90 op=UNLOAD Jan 19 12:00:48.502000 audit[2511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.502000 audit: BPF prog-id=92 op=LOAD Jan 19 12:00:48.502000 audit[2511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2476 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393335366664626235643230373564616336633537643162633766 Jan 19 12:00:48.515000 audit: BPF prog-id=93 op=LOAD Jan 19 12:00:48.520000 audit: BPF prog-id=94 op=LOAD Jan 19 12:00:48.520000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.520000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.520000 audit: BPF prog-id=94 op=UNLOAD Jan 19 12:00:48.520000 audit[2527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.520000 audit: BPF prog-id=95 op=LOAD Jan 19 12:00:48.520000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.520000 audit: BPF prog-id=96 op=LOAD Jan 19 12:00:48.520000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.520000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.522000 audit: BPF prog-id=96 op=UNLOAD Jan 19 12:00:48.522000 audit[2527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.522000 audit: BPF prog-id=95 op=UNLOAD Jan 19 12:00:48.522000 audit[2527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.522000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.522000 audit: BPF prog-id=97 op=LOAD Jan 19 12:00:48.522000 audit[2527]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2495 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666339613830333864636134333435313734383432386330343964 Jan 19 12:00:48.580220 kubelet[2411]: I0119 12:00:48.579624 2411 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:00:48.581453 kubelet[2411]: E0119 12:00:48.581431 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 19 12:00:48.590229 containerd[1598]: time="2026-01-19T12:00:48.590202201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bce0eeeaf069ff8ea3056a00eb2f072d,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c\"" Jan 19 12:00:48.595586 kubelet[2411]: E0119 12:00:48.595293 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.598649 containerd[1598]: time="2026-01-19T12:00:48.598496854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b\"" Jan 19 12:00:48.600564 kubelet[2411]: E0119 12:00:48.600423 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.604758 containerd[1598]: time="2026-01-19T12:00:48.604738105Z" level=info msg="CreateContainer within sandbox \"77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 19 12:00:48.614141 containerd[1598]: time="2026-01-19T12:00:48.613569194Z" level=info msg="CreateContainer within sandbox \"cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 19 12:00:48.625823 containerd[1598]: time="2026-01-19T12:00:48.625722892Z" level=info msg="Container 89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:00:48.637010 containerd[1598]: time="2026-01-19T12:00:48.636598074Z" level=info msg="Container ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:00:48.649719 containerd[1598]: time="2026-01-19T12:00:48.649575371Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3\"" Jan 19 12:00:48.653331 containerd[1598]: time="2026-01-19T12:00:48.653311190Z" level=info msg="CreateContainer within sandbox \"77f2eb78d1165df96f34f793906f4f9292efcae012e3cacf8d0970a3c2f0af8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c\"" Jan 19 12:00:48.654741 kubelet[2411]: E0119 12:00:48.654375 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:48.655737 containerd[1598]: time="2026-01-19T12:00:48.655717313Z" level=info msg="StartContainer for \"89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c\"" Jan 19 12:00:48.658655 containerd[1598]: time="2026-01-19T12:00:48.658631190Z" level=info msg="connecting to shim 89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c" address="unix:///run/containerd/s/dd060c7cc2f1a8321a9d44c1b4f319a1b14e23991aa81f96363e76c39cd16fad" protocol=ttrpc version=3 Jan 19 12:00:48.664480 containerd[1598]: time="2026-01-19T12:00:48.664273876Z" level=info msg="CreateContainer within sandbox \"cd9356fdbb5d2075dac6c57d1bc7f795402d285219f3a88dd5edbdcc9263ad7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e\"" Jan 19 12:00:48.664839 containerd[1598]: time="2026-01-19T12:00:48.664711671Z" level=info msg="StartContainer for \"ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e\"" Jan 19 12:00:48.667322 containerd[1598]: time="2026-01-19T12:00:48.665817589Z" level=info msg="connecting to shim ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e" address="unix:///run/containerd/s/7966d6d612c479438d20c2181f18f9ee2e3f1752b9e5e629ab84b27444d4da07" protocol=ttrpc version=3 Jan 19 12:00:48.667322 containerd[1598]: time="2026-01-19T12:00:48.666331712Z" level=info msg="CreateContainer within sandbox \"41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 19 12:00:48.690190 containerd[1598]: time="2026-01-19T12:00:48.689940755Z" level=info msg="Container 0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:00:48.705368 systemd[1]: Started cri-containerd-89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c.scope - libcontainer container 89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c. 
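Annotation: the kubelet's repeated dns.go "Nameserver limits exceeded" errors in this stretch mean the host's resolv.conf lists more nameservers than the resolver limit of three, so kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and drops the rest. Below is a minimal standalone Go sketch of that check, an approximation for illustration rather than kubelet's actual implementation; trimming /etc/resolv.conf (or the file kubelet is pointed at via --resolv-conf / its config) to at most three nameservers silences the warning.

    // resolvcheck.go: stdlib-only approximation of the "Nameserver limits exceeded" check.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        const limit = 3 // resolver limit that kubelet enforces when building pod DNS config
        if len(nameservers) > limit {
            fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
                nameservers[:limit], nameservers[limit:])
        } else {
            fmt.Printf("nameservers within limit: %v\n", nameservers)
        }
    }
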
Jan 19 12:00:48.710862 containerd[1598]: time="2026-01-19T12:00:48.710838410Z" level=info msg="CreateContainer within sandbox \"41fc9a8038dca43451748428c049dafdd02945fc89abbbe6745bfff44878bbc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc\"" Jan 19 12:00:48.713958 containerd[1598]: time="2026-01-19T12:00:48.713537737Z" level=info msg="StartContainer for \"0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc\"" Jan 19 12:00:48.714880 containerd[1598]: time="2026-01-19T12:00:48.714509370Z" level=info msg="connecting to shim 0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc" address="unix:///run/containerd/s/4353562124e93b1cdda941a0daf090c8efeb26fbe2336a88477bcdeb667d3aa8" protocol=ttrpc version=3 Jan 19 12:00:48.722573 systemd[1]: Started cri-containerd-ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e.scope - libcontainer container ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e. Jan 19 12:00:48.729864 kubelet[2411]: E0119 12:00:48.729807 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 19 12:00:48.748000 audit: BPF prog-id=98 op=LOAD Jan 19 12:00:48.749000 audit: BPF prog-id=99 op=LOAD Jan 19 12:00:48.749000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.749000 audit: BPF prog-id=99 op=UNLOAD Jan 19 12:00:48.749000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.749000 audit: BPF prog-id=100 op=LOAD Jan 19 12:00:48.749000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.749000 
audit: BPF prog-id=101 op=LOAD Jan 19 12:00:48.749000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.750000 audit: BPF prog-id=101 op=UNLOAD Jan 19 12:00:48.750000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.750000 audit: BPF prog-id=100 op=UNLOAD Jan 19 12:00:48.750000 audit[2585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.750000 audit: BPF prog-id=102 op=LOAD Jan 19 12:00:48.750000 audit[2585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2476 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564373532393738316439636261396137356330356163363765653135 Jan 19 12:00:48.755000 audit: BPF prog-id=103 op=LOAD Jan 19 12:00:48.756000 audit: BPF prog-id=104 op=LOAD Jan 19 12:00:48.756000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.758000 audit: BPF prog-id=104 op=UNLOAD Jan 19 12:00:48.758000 audit[2584]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.759000 audit: BPF prog-id=105 op=LOAD Jan 19 12:00:48.759000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.760000 audit: BPF prog-id=106 op=LOAD Jan 19 12:00:48.760000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.760000 audit: BPF prog-id=106 op=UNLOAD Jan 19 12:00:48.760000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.761000 audit: BPF prog-id=105 op=UNLOAD Jan 19 12:00:48.761000 audit[2584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.761000 audit: BPF prog-id=107 op=LOAD Jan 19 12:00:48.761000 audit[2584]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2458 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839666238653132313932656333316266613430383962636133356636 Jan 19 12:00:48.781446 systemd[1]: Started cri-containerd-0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc.scope - libcontainer container 0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc. Jan 19 12:00:48.808000 audit: BPF prog-id=108 op=LOAD Jan 19 12:00:48.809000 audit: BPF prog-id=109 op=LOAD Jan 19 12:00:48.809000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152238 a2=98 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.809000 audit: BPF prog-id=109 op=UNLOAD Jan 19 12:00:48.809000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.814000 audit: BPF prog-id=110 op=LOAD Jan 19 12:00:48.814000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152488 a2=98 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.814000 audit: BPF prog-id=111 op=LOAD Jan 19 12:00:48.814000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000152218 a2=98 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.814000 audit: BPF prog-id=111 op=UNLOAD Jan 19 12:00:48.814000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.814000 audit: BPF prog-id=110 op=UNLOAD Jan 19 12:00:48.814000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.814000 audit: BPF prog-id=112 op=LOAD Jan 19 12:00:48.814000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001526e8 a2=98 a3=0 items=0 ppid=2495 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:00:48.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062346132323933326363343132643731336630636266623039316563 Jan 19 12:00:48.877567 containerd[1598]: time="2026-01-19T12:00:48.877473673Z" level=info msg="StartContainer for \"ed7529781d9cba9a75c05ac67ee1599ca4e66da478af7a7f2a0194d5a381cf8e\" returns successfully" Jan 19 12:00:48.883195 kubelet[2411]: E0119 12:00:48.882711 2411 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 19 12:00:48.897511 containerd[1598]: time="2026-01-19T12:00:48.897362034Z" level=info msg="StartContainer for \"89fb8e12192ec31bfa4089bca35f6e4f2efd2aa4cd9934a783cc79569b11de0c\" returns successfully" Jan 19 12:00:48.943289 containerd[1598]: time="2026-01-19T12:00:48.942967815Z" level=info msg="StartContainer for \"0b4a22932cc412d713f0cbfb091ec9a5430a950ea8b6baa3d028bf6ba40b41fc\" returns successfully" Jan 19 12:00:49.387155 kubelet[2411]: I0119 12:00:49.386668 2411 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:00:49.887526 kubelet[2411]: E0119 12:00:49.886901 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:49.887526 kubelet[2411]: E0119 12:00:49.887222 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:49.890392 kubelet[2411]: E0119 12:00:49.890214 2411 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:49.890392 kubelet[2411]: E0119 12:00:49.890308 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:49.896729 kubelet[2411]: E0119 12:00:49.896332 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:49.896729 kubelet[2411]: E0119 12:00:49.896541 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:50.905279 kubelet[2411]: E0119 12:00:50.904856 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:50.905279 kubelet[2411]: E0119 12:00:50.904966 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:50.905862 kubelet[2411]: E0119 12:00:50.905847 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:50.906690 kubelet[2411]: E0119 12:00:50.906657 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:50.917612 kubelet[2411]: E0119 12:00:50.917343 2411 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 19 12:00:50.917911 kubelet[2411]: E0119 12:00:50.917752 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:51.136641 kubelet[2411]: E0119 12:00:51.136396 2411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 19 12:00:51.219916 kubelet[2411]: I0119 12:00:51.219474 2411 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 19 12:00:51.222741 kubelet[2411]: I0119 12:00:51.222682 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:51.250416 kubelet[2411]: E0119 12:00:51.249992 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:51.252900 kubelet[2411]: I0119 12:00:51.252863 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:51.261501 kubelet[2411]: E0119 12:00:51.261383 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:51.261501 kubelet[2411]: I0119 12:00:51.261406 2411 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:51.266225 kubelet[2411]: E0119 12:00:51.265827 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:51.692792 kubelet[2411]: I0119 12:00:51.692526 2411 apiserver.go:52] "Watching apiserver" Jan 19 12:00:51.721557 kubelet[2411]: I0119 12:00:51.721325 2411 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 19 12:00:51.904660 kubelet[2411]: I0119 12:00:51.904415 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:51.904660 kubelet[2411]: I0119 12:00:51.904509 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:51.908901 kubelet[2411]: E0119 12:00:51.908665 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:51.909465 kubelet[2411]: E0119 12:00:51.908908 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:51.909465 kubelet[2411]: E0119 12:00:51.908917 2411 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:51.909465 kubelet[2411]: E0119 12:00:51.909263 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:52.909621 kubelet[2411]: I0119 12:00:52.907887 2411 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:52.920831 kubelet[2411]: E0119 12:00:52.920762 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:53.898231 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-8.scope)... Jan 19 12:00:53.898332 systemd[1]: Reloading... Jan 19 12:00:53.908807 kubelet[2411]: E0119 12:00:53.908664 2411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:54.066788 zram_generator::config[2735]: No configuration found. Jan 19 12:00:54.403324 systemd[1]: Reloading finished in 504 ms. Jan 19 12:00:54.482942 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:00:54.496787 systemd[1]: kubelet.service: Deactivated successfully. Jan 19 12:00:54.497695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 12:00:54.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:54.497947 systemd[1]: kubelet.service: Consumed 2.401s CPU time, 129.5M memory peak. 
Jan 19 12:00:54.504226 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 19 12:00:54.504297 kernel: audit: type=1131 audit(1768824054.496:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:54.502904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 12:00:54.504000 audit: BPF prog-id=113 op=LOAD Jan 19 12:00:54.533410 kernel: audit: type=1334 audit(1768824054.504:395): prog-id=113 op=LOAD Jan 19 12:00:54.533470 kernel: audit: type=1334 audit(1768824054.504:396): prog-id=80 op=UNLOAD Jan 19 12:00:54.504000 audit: BPF prog-id=80 op=UNLOAD Jan 19 12:00:54.504000 audit: BPF prog-id=114 op=LOAD Jan 19 12:00:54.547693 kernel: audit: type=1334 audit(1768824054.504:397): prog-id=114 op=LOAD Jan 19 12:00:54.547768 kernel: audit: type=1334 audit(1768824054.504:398): prog-id=115 op=LOAD Jan 19 12:00:54.504000 audit: BPF prog-id=115 op=LOAD Jan 19 12:00:54.504000 audit: BPF prog-id=81 op=UNLOAD Jan 19 12:00:54.562362 kernel: audit: type=1334 audit(1768824054.504:399): prog-id=81 op=UNLOAD Jan 19 12:00:54.562416 kernel: audit: type=1334 audit(1768824054.504:400): prog-id=82 op=UNLOAD Jan 19 12:00:54.562438 kernel: audit: type=1334 audit(1768824054.506:401): prog-id=116 op=LOAD Jan 19 12:00:54.562457 kernel: audit: type=1334 audit(1768824054.506:402): prog-id=73 op=UNLOAD Jan 19 12:00:54.562478 kernel: audit: type=1334 audit(1768824054.506:403): prog-id=117 op=LOAD Jan 19 12:00:54.504000 audit: BPF prog-id=82 op=UNLOAD Jan 19 12:00:54.506000 audit: BPF prog-id=116 op=LOAD Jan 19 12:00:54.506000 audit: BPF prog-id=73 op=UNLOAD Jan 19 12:00:54.506000 audit: BPF prog-id=117 op=LOAD Jan 19 12:00:54.506000 audit: BPF prog-id=118 op=LOAD Jan 19 12:00:54.506000 audit: BPF prog-id=74 op=UNLOAD Jan 19 12:00:54.506000 audit: BPF prog-id=75 op=UNLOAD Jan 19 12:00:54.507000 audit: BPF prog-id=119 op=LOAD Jan 19 12:00:54.507000 audit: BPF prog-id=71 op=UNLOAD Jan 19 12:00:54.507000 audit: BPF prog-id=120 op=LOAD Jan 19 12:00:54.507000 audit: BPF prog-id=76 op=UNLOAD Jan 19 12:00:54.507000 audit: BPF prog-id=121 op=LOAD Jan 19 12:00:54.507000 audit: BPF prog-id=122 op=LOAD Jan 19 12:00:54.507000 audit: BPF prog-id=69 op=UNLOAD Jan 19 12:00:54.507000 audit: BPF prog-id=70 op=UNLOAD Jan 19 12:00:54.512000 audit: BPF prog-id=123 op=LOAD Jan 19 12:00:54.512000 audit: BPF prog-id=63 op=UNLOAD Jan 19 12:00:54.512000 audit: BPF prog-id=124 op=LOAD Jan 19 12:00:54.512000 audit: BPF prog-id=125 op=LOAD Jan 19 12:00:54.512000 audit: BPF prog-id=64 op=UNLOAD Jan 19 12:00:54.512000 audit: BPF prog-id=65 op=UNLOAD Jan 19 12:00:54.514000 audit: BPF prog-id=126 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=77 op=UNLOAD Jan 19 12:00:54.514000 audit: BPF prog-id=127 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=128 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=78 op=UNLOAD Jan 19 12:00:54.514000 audit: BPF prog-id=79 op=UNLOAD Jan 19 12:00:54.514000 audit: BPF prog-id=129 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=66 op=UNLOAD Jan 19 12:00:54.514000 audit: BPF prog-id=130 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=131 op=LOAD Jan 19 12:00:54.514000 audit: BPF prog-id=67 op=UNLOAD Jan 19 12:00:54.519000 audit: BPF prog-id=68 op=UNLOAD Jan 19 12:00:54.520000 audit: BPF prog-id=132 op=LOAD Jan 19 12:00:54.520000 audit: BPF prog-id=72 op=UNLOAD Jan 19 12:00:54.837318 systemd[1]: Started kubelet.service - kubelet: The 
Kubernetes Node Agent. Jan 19 12:00:54.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:00:54.852494 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 12:00:54.994568 kubelet[2784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:00:54.994568 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 12:00:54.994568 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 12:00:54.994568 kubelet[2784]: I0119 12:00:54.994340 2784 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 12:00:55.005902 kubelet[2784]: I0119 12:00:55.005744 2784 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 19 12:00:55.005902 kubelet[2784]: I0119 12:00:55.005905 2784 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 12:00:55.006341 kubelet[2784]: I0119 12:00:55.006248 2784 server.go:956] "Client rotation is on, will bootstrap in background" Jan 19 12:00:55.007905 kubelet[2784]: I0119 12:00:55.007606 2784 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 19 12:00:55.019695 kubelet[2784]: I0119 12:00:55.019676 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 12:00:55.032777 kubelet[2784]: I0119 12:00:55.032700 2784 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 12:00:55.040583 kubelet[2784]: I0119 12:00:55.040432 2784 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 19 12:00:55.041357 kubelet[2784]: I0119 12:00:55.040617 2784 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 12:00:55.041357 kubelet[2784]: I0119 12:00:55.040639 2784 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 12:00:55.041357 kubelet[2784]: I0119 12:00:55.040768 2784 topology_manager.go:138] "Creating topology manager with none policy" Jan 19 12:00:55.041357 kubelet[2784]: I0119 12:00:55.040776 2784 container_manager_linux.go:303] "Creating device plugin manager" Jan 19 12:00:55.042488 kubelet[2784]: I0119 12:00:55.042348 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:00:55.042765 kubelet[2784]: I0119 12:00:55.042628 2784 kubelet.go:480] "Attempting to sync node with API server" Jan 19 12:00:55.042765 kubelet[2784]: I0119 12:00:55.042718 2784 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 12:00:55.044880 kubelet[2784]: I0119 12:00:55.044765 2784 kubelet.go:386] "Adding apiserver pod source" Jan 19 12:00:55.048236 kubelet[2784]: I0119 12:00:55.047769 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 12:00:55.052514 kubelet[2784]: I0119 12:00:55.052419 2784 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 12:00:55.054901 kubelet[2784]: I0119 12:00:55.054441 2784 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 19 12:00:55.066379 kubelet[2784]: I0119 12:00:55.066362 2784 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 19 12:00:55.066476 kubelet[2784]: I0119 12:00:55.066467 2784 server.go:1289] "Started kubelet" Jan 19 12:00:55.068406 kubelet[2784]: I0119 12:00:55.068283 2784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 12:00:55.068968 
kubelet[2784]: I0119 12:00:55.068884 2784 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 12:00:55.069433 kubelet[2784]: I0119 12:00:55.069420 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 12:00:55.074255 kubelet[2784]: I0119 12:00:55.073976 2784 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 12:00:55.080644 kubelet[2784]: I0119 12:00:55.080502 2784 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 19 12:00:55.087626 kubelet[2784]: I0119 12:00:55.087579 2784 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 19 12:00:55.090628 kubelet[2784]: I0119 12:00:55.089409 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 12:00:55.092460 kubelet[2784]: I0119 12:00:55.092440 2784 reconciler.go:26] "Reconciler: start to sync state" Jan 19 12:00:55.096649 kubelet[2784]: I0119 12:00:55.094921 2784 server.go:317] "Adding debug handlers to kubelet server" Jan 19 12:00:55.101004 kubelet[2784]: I0119 12:00:55.100481 2784 factory.go:223] Registration of the containerd container factory successfully Jan 19 12:00:55.101004 kubelet[2784]: I0119 12:00:55.100496 2784 factory.go:223] Registration of the systemd container factory successfully Jan 19 12:00:55.101004 kubelet[2784]: I0119 12:00:55.100551 2784 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 12:00:55.203590 kubelet[2784]: I0119 12:00:55.202238 2784 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 19 12:00:55.215434 kubelet[2784]: I0119 12:00:55.214999 2784 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 19 12:00:55.217651 kubelet[2784]: I0119 12:00:55.217518 2784 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 19 12:00:55.218503 kubelet[2784]: I0119 12:00:55.218220 2784 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
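Annotation: cAdvisor inside the kubelet registers one container factory per runtime it can find, so the "Registration of the crio container factory failed ... /var/run/crio/crio.sock: connect: no such file or directory" message above is harmless on this containerd host. A small stdlib-only sketch, assuming the default socket locations, that shows which CRI endpoints are actually present:

    // crisockets.go: report which well-known CRI runtime sockets exist on the host.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        sockets := []string{
            "/run/containerd/containerd.sock", // containerd (the runtime in use here)
            "/var/run/crio/crio.sock",         // CRI-O (absent, hence the log message above)
        }
        for _, s := range sockets {
            if _, err := os.Stat(s); err != nil {
                fmt.Printf("%s: %v\n", s, err)
                continue
            }
            fmt.Printf("%s: present\n", s)
        }
    }
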
Jan 19 12:00:55.219159 kubelet[2784]: I0119 12:00:55.219146 2784 kubelet.go:2436] "Starting kubelet main sync loop" Jan 19 12:00:55.224227 kubelet[2784]: E0119 12:00:55.220918 2784 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 12:00:55.270227 kubelet[2784]: I0119 12:00:55.270199 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 12:00:55.270782 kubelet[2784]: I0119 12:00:55.270638 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 12:00:55.270848 kubelet[2784]: I0119 12:00:55.270839 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 19 12:00:55.273164 kubelet[2784]: I0119 12:00:55.272687 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 19 12:00:55.273576 kubelet[2784]: I0119 12:00:55.273238 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 19 12:00:55.275157 kubelet[2784]: I0119 12:00:55.273747 2784 policy_none.go:49] "None policy: Start" Jan 19 12:00:55.275157 kubelet[2784]: I0119 12:00:55.273993 2784 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 19 12:00:55.275157 kubelet[2784]: I0119 12:00:55.274011 2784 state_mem.go:35] "Initializing new in-memory state store" Jan 19 12:00:55.275602 kubelet[2784]: I0119 12:00:55.275507 2784 state_mem.go:75] "Updated machine memory state" Jan 19 12:00:55.303770 kubelet[2784]: E0119 12:00:55.302977 2784 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 19 12:00:55.310992 kubelet[2784]: I0119 12:00:55.310975 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 12:00:55.312705 kubelet[2784]: I0119 12:00:55.312344 2784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 12:00:55.313414 kubelet[2784]: I0119 12:00:55.313322 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 12:00:55.318224 kubelet[2784]: E0119 12:00:55.317856 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 19 12:00:55.327605 kubelet[2784]: I0119 12:00:55.327587 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:55.329491 kubelet[2784]: I0119 12:00:55.327814 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.329563 kubelet[2784]: I0119 12:00:55.328417 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:55.349787 kubelet[2784]: E0119 12:00:55.349153 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:55.400010 kubelet[2784]: I0119 12:00:55.399685 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:55.400010 kubelet[2784]: I0119 12:00:55.399792 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:55.400010 kubelet[2784]: I0119 12:00:55.399815 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.400010 kubelet[2784]: I0119 12:00:55.399838 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 19 12:00:55.400010 kubelet[2784]: I0119 12:00:55.399859 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bce0eeeaf069ff8ea3056a00eb2f072d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bce0eeeaf069ff8ea3056a00eb2f072d\") " pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:55.400313 kubelet[2784]: I0119 12:00:55.399881 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.400313 kubelet[2784]: I0119 12:00:55.399898 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.400313 
kubelet[2784]: I0119 12:00:55.400167 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.400313 kubelet[2784]: I0119 12:00:55.400195 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:55.439219 kubelet[2784]: I0119 12:00:55.438753 2784 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 19 12:00:55.458433 kubelet[2784]: I0119 12:00:55.457502 2784 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 19 12:00:55.458433 kubelet[2784]: I0119 12:00:55.457568 2784 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 19 12:00:55.643402 kubelet[2784]: E0119 12:00:55.642302 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:55.650275 kubelet[2784]: E0119 12:00:55.650222 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:55.652685 kubelet[2784]: E0119 12:00:55.652436 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:56.049519 kubelet[2784]: I0119 12:00:56.049256 2784 apiserver.go:52] "Watching apiserver" Jan 19 12:00:56.088859 kubelet[2784]: I0119 12:00:56.088744 2784 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 19 12:00:56.158985 kubelet[2784]: I0119 12:00:56.158822 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.158804778 podStartE2EDuration="1.158804778s" podCreationTimestamp="2026-01-19 12:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:00:56.155786494 +0000 UTC m=+1.263196876" watchObservedRunningTime="2026-01-19 12:00:56.158804778 +0000 UTC m=+1.266215150" Jan 19 12:00:56.181507 kubelet[2784]: I0119 12:00:56.180971 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.180840399 podStartE2EDuration="1.180840399s" podCreationTimestamp="2026-01-19 12:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:00:56.180658225 +0000 UTC m=+1.288068607" watchObservedRunningTime="2026-01-19 12:00:56.180840399 +0000 UTC m=+1.288250771" Jan 19 12:00:56.204267 kubelet[2784]: I0119 12:00:56.203623 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.20360451 
podStartE2EDuration="4.20360451s" podCreationTimestamp="2026-01-19 12:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:00:56.200652479 +0000 UTC m=+1.308062881" watchObservedRunningTime="2026-01-19 12:00:56.20360451 +0000 UTC m=+1.311014882" Jan 19 12:00:56.257290 kubelet[2784]: E0119 12:00:56.256854 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:56.257915 kubelet[2784]: I0119 12:00:56.257589 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:56.257915 kubelet[2784]: I0119 12:00:56.257733 2784 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:56.278995 kubelet[2784]: E0119 12:00:56.278925 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 19 12:00:56.279500 kubelet[2784]: E0119 12:00:56.279408 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:56.284812 kubelet[2784]: E0119 12:00:56.284501 2784 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 19 12:00:56.284909 kubelet[2784]: E0119 12:00:56.284737 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:57.261487 kubelet[2784]: E0119 12:00:57.261406 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:57.261487 kubelet[2784]: E0119 12:00:57.261406 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:57.262298 kubelet[2784]: E0119 12:00:57.261996 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:57.879006 update_engine[1581]: I20260119 12:00:57.878353 1581 update_attempter.cc:509] Updating boot flags... Jan 19 12:00:58.262996 kubelet[2784]: E0119 12:00:58.262694 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:59.302606 kubelet[2784]: E0119 12:00:59.302314 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:00:59.481568 kubelet[2784]: I0119 12:00:59.481413 2784 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 19 12:00:59.482271 containerd[1598]: time="2026-01-19T12:00:59.481991580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 19 12:00:59.483420 kubelet[2784]: I0119 12:00:59.482952 2784 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 19 12:01:00.269731 kubelet[2784]: E0119 12:01:00.269541 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:00.549743 systemd[1]: Created slice kubepods-besteffort-pod6d110555_baf3_42bb_a324_cdffd184e71c.slice - libcontainer container kubepods-besteffort-pod6d110555_baf3_42bb_a324_cdffd184e71c.slice. Jan 19 12:01:00.655985 kubelet[2784]: I0119 12:01:00.655769 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d110555-baf3-42bb-a324-cdffd184e71c-lib-modules\") pod \"kube-proxy-wmgqw\" (UID: \"6d110555-baf3-42bb-a324-cdffd184e71c\") " pod="kube-system/kube-proxy-wmgqw" Jan 19 12:01:00.655985 kubelet[2784]: I0119 12:01:00.655896 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmf9\" (UniqueName: \"kubernetes.io/projected/6d110555-baf3-42bb-a324-cdffd184e71c-kube-api-access-ffmf9\") pod \"kube-proxy-wmgqw\" (UID: \"6d110555-baf3-42bb-a324-cdffd184e71c\") " pod="kube-system/kube-proxy-wmgqw" Jan 19 12:01:00.655985 kubelet[2784]: I0119 12:01:00.655923 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d110555-baf3-42bb-a324-cdffd184e71c-kube-proxy\") pod \"kube-proxy-wmgqw\" (UID: \"6d110555-baf3-42bb-a324-cdffd184e71c\") " pod="kube-system/kube-proxy-wmgqw" Jan 19 12:01:00.655985 kubelet[2784]: I0119 12:01:00.655940 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d110555-baf3-42bb-a324-cdffd184e71c-xtables-lock\") pod \"kube-proxy-wmgqw\" (UID: \"6d110555-baf3-42bb-a324-cdffd184e71c\") " pod="kube-system/kube-proxy-wmgqw" Jan 19 12:01:00.750831 systemd[1]: Created slice kubepods-besteffort-podd0f288b1_3230_47ae_ab81_48c78a74b02c.slice - libcontainer container kubepods-besteffort-podd0f288b1_3230_47ae_ab81_48c78a74b02c.slice. 
Jan 19 12:01:00.856930 kubelet[2784]: I0119 12:01:00.856694 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0f288b1-3230-47ae-ab81-48c78a74b02c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wvflf\" (UID: \"d0f288b1-3230-47ae-ab81-48c78a74b02c\") " pod="tigera-operator/tigera-operator-7dcd859c48-wvflf" Jan 19 12:01:00.856930 kubelet[2784]: I0119 12:01:00.856798 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsdw\" (UniqueName: \"kubernetes.io/projected/d0f288b1-3230-47ae-ab81-48c78a74b02c-kube-api-access-4bsdw\") pod \"tigera-operator-7dcd859c48-wvflf\" (UID: \"d0f288b1-3230-47ae-ab81-48c78a74b02c\") " pod="tigera-operator/tigera-operator-7dcd859c48-wvflf" Jan 19 12:01:00.865989 kubelet[2784]: E0119 12:01:00.865875 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:00.868993 containerd[1598]: time="2026-01-19T12:01:00.868427927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmgqw,Uid:6d110555-baf3-42bb-a324-cdffd184e71c,Namespace:kube-system,Attempt:0,}" Jan 19 12:01:00.953423 kubelet[2784]: E0119 12:01:00.952856 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:00.976913 containerd[1598]: time="2026-01-19T12:01:00.976872834Z" level=info msg="connecting to shim fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107" address="unix:///run/containerd/s/354abba6a5bdd3588bf3309450dff61679867f91adb1641ab5c6badea19b009c" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:01:01.056535 containerd[1598]: time="2026-01-19T12:01:01.056498555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wvflf,Uid:d0f288b1-3230-47ae-ab81-48c78a74b02c,Namespace:tigera-operator,Attempt:0,}" Jan 19 12:01:01.102586 systemd[1]: Started cri-containerd-fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107.scope - libcontainer container fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107. 
Jan 19 12:01:01.141987 containerd[1598]: time="2026-01-19T12:01:01.141872422Z" level=info msg="connecting to shim 0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291" address="unix:///run/containerd/s/9c34f93b5949a146d7efd3bceb4e4f67257901fa2dab1aca81ac17494fe0fed3" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:01:01.145000 audit: BPF prog-id=133 op=LOAD Jan 19 12:01:01.152159 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 19 12:01:01.152227 kernel: audit: type=1334 audit(1768824061.145:436): prog-id=133 op=LOAD Jan 19 12:01:01.146000 audit: BPF prog-id=134 op=LOAD Jan 19 12:01:01.146000 audit[2877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.190486 kernel: audit: type=1334 audit(1768824061.146:437): prog-id=134 op=LOAD Jan 19 12:01:01.190551 kernel: audit: type=1300 audit(1768824061.146:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.190571 kernel: audit: type=1327 audit(1768824061.146:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.217186 kernel: audit: type=1334 audit(1768824061.146:438): prog-id=134 op=UNLOAD Jan 19 12:01:01.146000 audit: BPF prog-id=134 op=UNLOAD Jan 19 12:01:01.146000 audit[2877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.254411 kernel: audit: type=1300 audit(1768824061.146:438): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.254493 kernel: audit: type=1327 audit(1768824061.146:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.273308 kubelet[2784]: E0119 12:01:01.273284 2784 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:01.147000 audit: BPF prog-id=135 op=LOAD Jan 19 12:01:01.287941 kernel: audit: type=1334 audit(1768824061.147:439): prog-id=135 op=LOAD Jan 19 12:01:01.288622 kernel: audit: type=1300 audit(1768824061.147:439): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.147000 audit[2877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.294775 systemd[1]: Started cri-containerd-0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291.scope - libcontainer container 0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291. Jan 19 12:01:01.309476 containerd[1598]: time="2026-01-19T12:01:01.308416845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wmgqw,Uid:6d110555-baf3-42bb-a324-cdffd184e71c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107\"" Jan 19 12:01:01.312186 kubelet[2784]: E0119 12:01:01.309921 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:01.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.333166 containerd[1598]: time="2026-01-19T12:01:01.332181056Z" level=info msg="CreateContainer within sandbox \"fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 19 12:01:01.147000 audit: BPF prog-id=136 op=LOAD Jan 19 12:01:01.147000 audit[2877]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.348267 kernel: audit: type=1327 audit(1768824061.147:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.147000 audit: BPF prog-id=136 op=UNLOAD Jan 19 12:01:01.147000 audit[2877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2866 pid=2877 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.147000 audit: BPF prog-id=135 op=UNLOAD Jan 19 12:01:01.147000 audit[2877]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.148000 audit: BPF prog-id=137 op=LOAD Jan 19 12:01:01.148000 audit[2877]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2866 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662323633346264366435613031383762353761326139386661313265 Jan 19 12:01:01.330000 audit: BPF prog-id=138 op=LOAD Jan 19 12:01:01.333000 audit: BPF prog-id=139 op=LOAD Jan 19 12:01:01.333000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.333000 audit: BPF prog-id=139 op=UNLOAD Jan 19 12:01:01.333000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.334000 audit: BPF prog-id=140 op=LOAD Jan 19 12:01:01.334000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.336000 audit: BPF prog-id=141 op=LOAD Jan 19 12:01:01.336000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.336000 audit: BPF prog-id=141 op=UNLOAD Jan 19 12:01:01.336000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.336000 audit: BPF prog-id=140 op=UNLOAD Jan 19 12:01:01.336000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.336000 audit: BPF prog-id=142 op=LOAD Jan 19 12:01:01.336000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393536653637663564376362316661343935303330353864366538 Jan 19 12:01:01.358416 containerd[1598]: time="2026-01-19T12:01:01.357774093Z" level=info msg="Container 742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:01:01.379004 containerd[1598]: time="2026-01-19T12:01:01.378872489Z" level=info msg="CreateContainer within sandbox \"fb2634bd6d5a0187b57a2a98fa12e2ac967d858112b1193cead8b25960c2a107\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677\"" Jan 19 12:01:01.380833 containerd[1598]: time="2026-01-19T12:01:01.380794734Z" level=info msg="StartContainer for \"742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677\"" Jan 19 12:01:01.383969 containerd[1598]: time="2026-01-19T12:01:01.383938713Z" level=info msg="connecting to shim 742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677" address="unix:///run/containerd/s/354abba6a5bdd3588bf3309450dff61679867f91adb1641ab5c6badea19b009c" protocol=ttrpc version=3 Jan 19 12:01:01.435242 containerd[1598]: time="2026-01-19T12:01:01.434408984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wvflf,Uid:d0f288b1-3230-47ae-ab81-48c78a74b02c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291\"" Jan 19 12:01:01.438466 containerd[1598]: time="2026-01-19T12:01:01.437485262Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 19 12:01:01.446324 systemd[1]: Started cri-containerd-742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677.scope - libcontainer container 742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677. Jan 19 12:01:01.537000 audit: BPF prog-id=143 op=LOAD Jan 19 12:01:01.537000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2866 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734326435313139653030613736646237393237343836663562616465 Jan 19 12:01:01.537000 audit: BPF prog-id=144 op=LOAD Jan 19 12:01:01.537000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2866 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734326435313139653030613736646237393237343836663562616465 Jan 19 12:01:01.537000 audit: BPF prog-id=144 op=UNLOAD Jan 19 12:01:01.537000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2866 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734326435313139653030613736646237393237343836663562616465 Jan 19 12:01:01.537000 audit: BPF prog-id=143 op=UNLOAD Jan 19 12:01:01.537000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2866 
pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734326435313139653030613736646237393237343836663562616465 Jan 19 12:01:01.537000 audit: BPF prog-id=145 op=LOAD Jan 19 12:01:01.537000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2866 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:01.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734326435313139653030613736646237393237343836663562616465 Jan 19 12:01:01.618147 containerd[1598]: time="2026-01-19T12:01:01.617386066Z" level=info msg="StartContainer for \"742d5119e00a76db7927486f5badec32db73380d9de059aee8a1ee932a36c677\" returns successfully" Jan 19 12:01:02.033000 audit[3017]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.033000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff18023320 a2=0 a3=7fff1802330c items=0 ppid=2962 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 12:01:02.038000 audit[3018]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.038000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4eca03b0 a2=0 a3=7ffc4eca039c items=0 ppid=2962 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 12:01:02.040000 audit[3021]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.040000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd01e57a90 a2=0 a3=7ffd01e57a7c items=0 ppid=2962 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 12:01:02.046000 audit[3023]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3023 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.046000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdea3d9180 a2=0 a3=7ffdea3d916c items=0 ppid=2962 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.046000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 12:01:02.048000 audit[3024]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.048000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8cbc1a80 a2=0 a3=7ffe8cbc1a6c items=0 ppid=2962 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 12:01:02.063000 audit[3025]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.063000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff408a9f0 a2=0 a3=7ffff408a9dc items=0 ppid=2962 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 12:01:02.141000 audit[3026]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.141000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe65cc8a00 a2=0 a3=7ffe65cc89ec items=0 ppid=2962 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 12:01:02.151000 audit[3028]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.151000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd053877e0 a2=0 a3=7ffd053877cc items=0 ppid=2962 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 19 12:01:02.168000 audit[3031]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3031 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.168000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe11b76f00 a2=0 a3=7ffe11b76eec items=0 ppid=2962 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 19 12:01:02.173000 audit[3032]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.173000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6566b1d0 a2=0 a3=7ffd6566b1bc items=0 ppid=2962 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 12:01:02.185000 audit[3034]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.185000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffed665320 a2=0 a3=7fffed66530c items=0 ppid=2962 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 12:01:02.189000 audit[3035]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.189000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffc1af820 a2=0 a3=7ffffc1af80c items=0 ppid=2962 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 12:01:02.200000 audit[3037]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.200000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff89126b20 a2=0 a3=7fff89126b0c items=0 ppid=2962 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.200000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 19 12:01:02.219000 audit[3040]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.219000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde01eef10 a2=0 a3=7ffde01eeefc items=0 ppid=2962 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 19 12:01:02.224000 audit[3041]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.224000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbb9d0990 a2=0 a3=7ffdbb9d097c items=0 ppid=2962 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 12:01:02.236000 audit[3043]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.236000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1b320250 a2=0 a3=7ffd1b32023c items=0 ppid=2962 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 12:01:02.243000 audit[3044]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.243000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff69bf4db0 a2=0 a3=7fff69bf4d9c items=0 ppid=2962 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 12:01:02.254000 audit[3046]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.254000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc63744700 a2=0 a3=7ffc637446ec items=0 ppid=2962 
pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 12:01:02.271000 audit[3049]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.271000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3cb9e1d0 a2=0 a3=7ffc3cb9e1bc items=0 ppid=2962 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 12:01:02.287804 kubelet[2784]: E0119 12:01:02.287581 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:02.292000 audit[3052]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.292000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd44910c50 a2=0 a3=7ffd44910c3c items=0 ppid=2962 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 19 12:01:02.302000 audit[3053]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.302000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe573c0e90 a2=0 a3=7ffe573c0e7c items=0 ppid=2962 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 12:01:02.328000 audit[3055]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.328000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff8f28d2a0 a2=0 a3=7fff8f28d28c items=0 ppid=2962 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:01:02.352000 audit[3058]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.352000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff79c074d0 a2=0 a3=7fff79c074bc items=0 ppid=2962 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:01:02.360000 audit[3059]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.360000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7ba87610 a2=0 a3=7ffd7ba875fc items=0 ppid=2962 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 12:01:02.371000 audit[3061]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 12:01:02.371000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe541e0c90 a2=0 a3=7ffe541e0c7c items=0 ppid=2962 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 12:01:02.437000 audit[3067]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:02.437000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe9bbf0e90 a2=0 a3=7ffe9bbf0e7c items=0 ppid=2962 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:02.456000 audit[3067]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:02.456000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe9bbf0e90 a2=0 a3=7ffe9bbf0e7c 
items=0 ppid=2962 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:02.462000 audit[3072]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.462000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc7ffa1760 a2=0 a3=7ffc7ffa174c items=0 ppid=2962 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 12:01:02.472000 audit[3074]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.472000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd63e5f500 a2=0 a3=7ffd63e5f4ec items=0 ppid=2962 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 19 12:01:02.490000 audit[3077]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.490000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffba47a620 a2=0 a3=7fffba47a60c items=0 ppid=2962 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 19 12:01:02.498000 audit[3078]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.498000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2876a9f0 a2=0 a3=7ffe2876a9dc items=0 ppid=2962 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 12:01:02.510000 audit[3080]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3080 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.510000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcec67d0a0 a2=0 a3=7ffcec67d08c items=0 ppid=2962 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 12:01:02.516000 audit[3081]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.516000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff51c71d70 a2=0 a3=7fff51c71d5c items=0 ppid=2962 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.516000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 12:01:02.531000 audit[3083]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.531000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe23def770 a2=0 a3=7ffe23def75c items=0 ppid=2962 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 19 12:01:02.551000 audit[3086]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.551000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd4d0ba6c0 a2=0 a3=7ffd4d0ba6ac items=0 ppid=2962 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 19 12:01:02.559000 audit[3087]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.559000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd19ddef0 a2=0 a3=7fffd19ddedc items=0 ppid=2962 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 19 12:01:02.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 12:01:02.571000 audit[3089]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.571000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7b8e8710 a2=0 a3=7ffe7b8e86fc items=0 ppid=2962 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 12:01:02.576000 audit[3090]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.576000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc15c4280 a2=0 a3=7fffc15c426c items=0 ppid=2962 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 12:01:02.588000 audit[3092]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.588000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6c5a7580 a2=0 a3=7ffe6c5a756c items=0 ppid=2962 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 12:01:02.607000 audit[3095]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.607000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9afb70f0 a2=0 a3=7ffd9afb70dc items=0 ppid=2962 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 19 12:01:02.625000 audit[3098]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.625000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc4d6b7cc0 a2=0 a3=7ffc4d6b7cac items=0 ppid=2962 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 19 12:01:02.631000 audit[3099]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.631000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe232813f0 a2=0 a3=7ffe232813dc items=0 ppid=2962 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.631000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 12:01:02.644000 audit[3101]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.644000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffec864e210 a2=0 a3=7ffec864e1fc items=0 ppid=2962 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:01:02.661000 audit[3104]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.661000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc9b3b730 a2=0 a3=7ffdc9b3b71c items=0 ppid=2962 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 12:01:02.667000 audit[3105]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.667000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed4412450 a2=0 a3=7ffed441243c items=0 ppid=2962 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 12:01:02.681000 audit[3107]: NETFILTER_CFG table=nat:99 family=10 
entries=2 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.681000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd5f9aed40 a2=0 a3=7ffd5f9aed2c items=0 ppid=2962 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 12:01:02.687000 audit[3108]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.687000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2d69ace0 a2=0 a3=7ffe2d69accc items=0 ppid=2962 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 12:01:02.698000 audit[3110]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.698000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff4ee19640 a2=0 a3=7fff4ee1962c items=0 ppid=2962 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:01:02.715000 audit[3113]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 12:01:02.715000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf76dcca0 a2=0 a3=7ffcf76dcc8c items=0 ppid=2962 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 12:01:02.727000 audit[3115]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 12:01:02.727000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe2c105480 a2=0 a3=7ffe2c10546c items=0 ppid=2962 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.727000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:02.727000 audit[3115]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 12:01:02.727000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe2c105480 a2=0 a3=7ffe2c10546c items=0 ppid=2962 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:02.727000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:03.178753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997451047.mount: Deactivated successfully. Jan 19 12:01:04.328170 kubelet[2784]: E0119 12:01:04.327949 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:04.355812 kubelet[2784]: I0119 12:01:04.355631 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wmgqw" podStartSLOduration=4.355619614 podStartE2EDuration="4.355619614s" podCreationTimestamp="2026-01-19 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:01:02.312582522 +0000 UTC m=+7.419992915" watchObservedRunningTime="2026-01-19 12:01:04.355619614 +0000 UTC m=+9.463029986" Jan 19 12:01:05.295739 kubelet[2784]: E0119 12:01:05.295485 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:06.301659 kubelet[2784]: E0119 12:01:06.301367 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:06.322572 containerd[1598]: time="2026-01-19T12:01:06.322288253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:06.324679 containerd[1598]: time="2026-01-19T12:01:06.324480591Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 19 12:01:06.327442 containerd[1598]: time="2026-01-19T12:01:06.327290177Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:06.333361 containerd[1598]: time="2026-01-19T12:01:06.333008889Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:06.333638 containerd[1598]: time="2026-01-19T12:01:06.333533441Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.895928677s" Jan 19 12:01:06.333638 containerd[1598]: time="2026-01-19T12:01:06.333623384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 19 12:01:06.345250 containerd[1598]: time="2026-01-19T12:01:06.345000170Z" level=info msg="CreateContainer within sandbox \"0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 19 12:01:06.362532 containerd[1598]: time="2026-01-19T12:01:06.362377775Z" level=info msg="Container f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:01:06.382798 containerd[1598]: time="2026-01-19T12:01:06.382577159Z" level=info msg="CreateContainer within sandbox \"0e956e67f5d7cb1fa49503058d6e80c23e41453294dece1a8c1217a7bce03291\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5\"" Jan 19 12:01:06.385183 containerd[1598]: time="2026-01-19T12:01:06.384401691Z" level=info msg="StartContainer for \"f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5\"" Jan 19 12:01:06.387285 containerd[1598]: time="2026-01-19T12:01:06.386937017Z" level=info msg="connecting to shim f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5" address="unix:///run/containerd/s/9c34f93b5949a146d7efd3bceb4e4f67257901fa2dab1aca81ac17494fe0fed3" protocol=ttrpc version=3 Jan 19 12:01:06.429649 systemd[1]: Started cri-containerd-f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5.scope - libcontainer container f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5. Jan 19 12:01:06.464000 audit: BPF prog-id=146 op=LOAD Jan 19 12:01:06.472136 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 19 12:01:06.472286 kernel: audit: type=1334 audit(1768824066.464:508): prog-id=146 op=LOAD Jan 19 12:01:06.479308 kernel: audit: type=1334 audit(1768824066.466:509): prog-id=147 op=LOAD Jan 19 12:01:06.466000 audit: BPF prog-id=147 op=LOAD Jan 19 12:01:06.488008 kernel: audit: type=1300 audit(1768824066.466:509): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: BPF prog-id=147 op=UNLOAD Jan 19 12:01:06.548551 kernel: audit: type=1327 audit(1768824066.466:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.548626 kernel: audit: type=1334 audit(1768824066.466:510): prog-id=147 op=UNLOAD Jan 19 12:01:06.548656 kernel: audit: type=1300 audit(1768824066.466:510): arch=c000003e 
syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.601932 kernel: audit: type=1327 audit(1768824066.466:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.602219 kernel: audit: type=1334 audit(1768824066.466:511): prog-id=148 op=LOAD Jan 19 12:01:06.466000 audit: BPF prog-id=148 op=LOAD Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.613505 containerd[1598]: time="2026-01-19T12:01:06.612205017Z" level=info msg="StartContainer for \"f2f52c214c6acd01da8cc3ad1ffbe1b3671606dfd7b88bb020d0e52f834205d5\" returns successfully" Jan 19 12:01:06.639960 kernel: audit: type=1300 audit(1768824066.466:511): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.640278 kernel: audit: type=1327 audit(1768824066.466:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: BPF prog-id=149 op=LOAD Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: BPF prog-id=149 op=UNLOAD Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: BPF prog-id=148 op=UNLOAD Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:06.466000 audit: BPF prog-id=150 op=LOAD Jan 19 12:01:06.466000 audit[3126]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2905 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:06.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632663532633231346336616364303164613863633361643166666265 Jan 19 12:01:07.335980 kubelet[2784]: I0119 12:01:07.335744 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wvflf" podStartSLOduration=2.43540818 podStartE2EDuration="7.33572683s" podCreationTimestamp="2026-01-19 12:01:00 +0000 UTC" firstStartedPulling="2026-01-19 12:01:01.435949082 +0000 UTC m=+6.543359454" lastFinishedPulling="2026-01-19 12:01:06.336267731 +0000 UTC m=+11.443678104" observedRunningTime="2026-01-19 12:01:07.335696884 +0000 UTC m=+12.443107256" watchObservedRunningTime="2026-01-19 12:01:07.33572683 +0000 UTC m=+12.443137202" Jan 19 12:01:13.938337 sudo[1824]: pam_unix(sudo:session): session closed for user root Jan 19 12:01:13.937000 audit[1824]: USER_END pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 19 12:01:13.975672 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 19 12:01:13.975783 kernel: audit: type=1106 audit(1768824073.937:516): pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:01:13.980206 sshd[1823]: Connection closed by 10.0.0.1 port 35256 Jan 19 12:01:13.978983 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Jan 19 12:01:13.937000 audit[1824]: CRED_DISP pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:01:13.992708 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:35256.service: Deactivated successfully. Jan 19 12:01:14.000740 systemd[1]: session-8.scope: Deactivated successfully. Jan 19 12:01:14.001244 systemd[1]: session-8.scope: Consumed 7.421s CPU time, 212.9M memory peak. Jan 19 12:01:13.987000 audit[1819]: USER_END pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:01:14.056305 kernel: audit: type=1104 audit(1768824073.937:517): pid=1824 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 12:01:14.056383 kernel: audit: type=1106 audit(1768824073.987:518): pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:01:14.058760 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jan 19 12:01:14.066220 systemd-logind[1580]: Removed session 8. Jan 19 12:01:13.987000 audit[1819]: CRED_DISP pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:01:13.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.26:22-10.0.0.1:35256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:01:14.138516 kernel: audit: type=1104 audit(1768824073.987:519): pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:01:14.140125 kernel: audit: type=1131 audit(1768824073.992:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.26:22-10.0.0.1:35256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:01:14.880000 audit[3219]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:14.915295 kernel: audit: type=1325 audit(1768824074.880:521): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:14.880000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbd47f620 a2=0 a3=7ffdbd47f60c items=0 ppid=2962 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:14.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:15.042321 kernel: audit: type=1300 audit(1768824074.880:521): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdbd47f620 a2=0 a3=7ffdbd47f60c items=0 ppid=2962 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:15.042444 kernel: audit: type=1327 audit(1768824074.880:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:15.042477 kernel: audit: type=1325 audit(1768824074.929:522): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:14.929000 audit[3219]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:15.076893 kernel: audit: type=1300 audit(1768824074.929:522): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbd47f620 a2=0 a3=0 items=0 ppid=2962 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:14.929000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbd47f620 a2=0 a3=0 items=0 ppid=2962 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:14.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:15.246000 audit[3221]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:15.246000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffff8bb5bd0 a2=0 a3=7ffff8bb5bbc items=0 ppid=2962 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:15.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:15.260000 audit[3221]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3221 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:15.260000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffff8bb5bd0 a2=0 a3=0 items=0 ppid=2962 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:15.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:19.761269 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 19 12:01:19.762272 kernel: audit: type=1325 audit(1768824079.731:525): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:19.731000 audit[3225]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:19.731000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7fafec20 a2=0 a3=7ffc7fafec0c items=0 ppid=2962 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:19.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:19.842971 kernel: audit: type=1300 audit(1768824079.731:525): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7fafec20 a2=0 a3=7ffc7fafec0c items=0 ppid=2962 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:19.843379 kernel: audit: type=1327 audit(1768824079.731:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:19.819000 audit[3225]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:19.869533 kernel: audit: type=1325 audit(1768824079.819:526): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:19.869632 kernel: audit: type=1300 audit(1768824079.819:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7fafec20 a2=0 a3=0 items=0 ppid=2962 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:19.819000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7fafec20 a2=0 a3=0 items=0 ppid=2962 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:19.910605 kernel: audit: type=1327 audit(1768824079.819:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:19.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 19 12:01:19.948000 audit[3227]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:19.948000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe872bd10 a2=0 a3=7fffe872bcfc items=0 ppid=2962 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:20.023137 kernel: audit: type=1325 audit(1768824079.948:527): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:20.023355 kernel: audit: type=1300 audit(1768824079.948:527): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe872bd10 a2=0 a3=7fffe872bcfc items=0 ppid=2962 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:20.023388 kernel: audit: type=1327 audit(1768824079.948:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:19.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:20.047000 audit[3227]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:20.075441 kernel: audit: type=1325 audit(1768824080.047:528): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:20.047000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe872bd10 a2=0 a3=0 items=0 ppid=2962 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:20.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:21.135000 audit[3229]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:21.135000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcbf129dd0 a2=0 a3=7ffcbf129dbc items=0 ppid=2962 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:21.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:21.142000 audit[3229]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:21.142000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbf129dd0 a2=0 a3=0 items=0 ppid=2962 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 19 12:01:21.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:22.887868 systemd[1]: Created slice kubepods-besteffort-podd2fc4b4e_053f_4b1e_9874_2d7b316d906e.slice - libcontainer container kubepods-besteffort-podd2fc4b4e_053f_4b1e_9874_2d7b316d906e.slice. Jan 19 12:01:22.960748 kubelet[2784]: I0119 12:01:22.960504 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2fc4b4e-053f-4b1e-9874-2d7b316d906e-tigera-ca-bundle\") pod \"calico-typha-5d779687bb-ftjkf\" (UID: \"d2fc4b4e-053f-4b1e-9874-2d7b316d906e\") " pod="calico-system/calico-typha-5d779687bb-ftjkf" Jan 19 12:01:22.960748 kubelet[2784]: I0119 12:01:22.960557 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d2fc4b4e-053f-4b1e-9874-2d7b316d906e-typha-certs\") pod \"calico-typha-5d779687bb-ftjkf\" (UID: \"d2fc4b4e-053f-4b1e-9874-2d7b316d906e\") " pod="calico-system/calico-typha-5d779687bb-ftjkf" Jan 19 12:01:22.960748 kubelet[2784]: I0119 12:01:22.960587 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssk5d\" (UniqueName: \"kubernetes.io/projected/d2fc4b4e-053f-4b1e-9874-2d7b316d906e-kube-api-access-ssk5d\") pod \"calico-typha-5d779687bb-ftjkf\" (UID: \"d2fc4b4e-053f-4b1e-9874-2d7b316d906e\") " pod="calico-system/calico-typha-5d779687bb-ftjkf" Jan 19 12:01:22.993000 audit[3231]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:22.993000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdbcb3ad10 a2=0 a3=7ffdbcb3acfc items=0 ppid=2962 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:22.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:23.005000 audit[3231]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:23.005000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbcb3ad10 a2=0 a3=0 items=0 ppid=2962 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:23.057000 audit[3233]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:23.057000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff21b7dfc0 a2=0 a3=7fff21b7dfac items=0 ppid=2962 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.057000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:23.072000 audit[3233]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:23.072000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21b7dfc0 a2=0 a3=0 items=0 ppid=2962 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:23.180802 systemd[1]: Created slice kubepods-besteffort-pod2f95fdcb_8c3f_4ef4_9dfa_7ab1b2468395.slice - libcontainer container kubepods-besteffort-pod2f95fdcb_8c3f_4ef4_9dfa_7ab1b2468395.slice. Jan 19 12:01:23.204844 kubelet[2784]: E0119 12:01:23.204597 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:23.207244 containerd[1598]: time="2026-01-19T12:01:23.206849724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d779687bb-ftjkf,Uid:d2fc4b4e-053f-4b1e-9874-2d7b316d906e,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:23.270266 kubelet[2784]: I0119 12:01:23.263971 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-var-lib-calico\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270266 kubelet[2784]: I0119 12:01:23.264461 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-tigera-ca-bundle\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270266 kubelet[2784]: I0119 12:01:23.264485 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-cni-net-dir\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270266 kubelet[2784]: I0119 12:01:23.264502 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-node-certs\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270266 kubelet[2784]: I0119 12:01:23.264519 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-lib-modules\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270714 kubelet[2784]: I0119 12:01:23.264532 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-var-run-calico\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270714 kubelet[2784]: I0119 12:01:23.264544 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-cni-bin-dir\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270714 kubelet[2784]: I0119 12:01:23.264560 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-flexvol-driver-host\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270714 kubelet[2784]: I0119 12:01:23.264605 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-cni-log-dir\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270714 kubelet[2784]: I0119 12:01:23.264637 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-xtables-lock\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270809 kubelet[2784]: I0119 12:01:23.264662 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-policysync\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.270809 kubelet[2784]: I0119 12:01:23.264683 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x99f\" (UniqueName: \"kubernetes.io/projected/2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395-kube-api-access-6x99f\") pod \"calico-node-z7s9l\" (UID: \"2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395\") " pod="calico-system/calico-node-z7s9l" Jan 19 12:01:23.381217 kubelet[2784]: E0119 12:01:23.380657 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:23.423982 kubelet[2784]: E0119 12:01:23.423596 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.423982 kubelet[2784]: W0119 12:01:23.423699 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.443654 kubelet[2784]: E0119 12:01:23.440773 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.460282 kubelet[2784]: E0119 12:01:23.460254 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.461696 kubelet[2784]: W0119 12:01:23.461206 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.461696 kubelet[2784]: E0119 12:01:23.461231 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.471266 kubelet[2784]: E0119 12:01:23.470230 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.471663 kubelet[2784]: W0119 12:01:23.471453 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.471663 kubelet[2784]: E0119 12:01:23.471476 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.472840 kubelet[2784]: E0119 12:01:23.472823 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.473464 kubelet[2784]: W0119 12:01:23.472905 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.473464 kubelet[2784]: E0119 12:01:23.472926 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.480895 kubelet[2784]: E0119 12:01:23.480601 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.480895 kubelet[2784]: W0119 12:01:23.480615 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.480895 kubelet[2784]: E0119 12:01:23.480628 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.482835 kubelet[2784]: E0119 12:01:23.482734 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.482835 kubelet[2784]: W0119 12:01:23.482754 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.482835 kubelet[2784]: E0119 12:01:23.482769 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.484990 kubelet[2784]: E0119 12:01:23.484517 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.484990 kubelet[2784]: W0119 12:01:23.484531 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.484990 kubelet[2784]: E0119 12:01:23.484545 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.486735 kubelet[2784]: E0119 12:01:23.486658 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.486735 kubelet[2784]: W0119 12:01:23.486675 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.486735 kubelet[2784]: E0119 12:01:23.486689 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.487648 kubelet[2784]: E0119 12:01:23.487636 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.487702 kubelet[2784]: W0119 12:01:23.487692 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.487794 kubelet[2784]: E0119 12:01:23.487747 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.490983 kubelet[2784]: E0119 12:01:23.490921 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.490983 kubelet[2784]: W0119 12:01:23.490939 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.490983 kubelet[2784]: E0119 12:01:23.490955 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.496004 kubelet[2784]: E0119 12:01:23.491664 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:23.507497 kubelet[2784]: E0119 12:01:23.506773 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.507989 kubelet[2784]: W0119 12:01:23.507961 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.509551 kubelet[2784]: E0119 12:01:23.509534 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.514270 kubelet[2784]: E0119 12:01:23.513252 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.514270 kubelet[2784]: W0119 12:01:23.513267 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.514270 kubelet[2784]: E0119 12:01:23.513282 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.515430 containerd[1598]: time="2026-01-19T12:01:23.515285362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7s9l,Uid:2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:23.549681 kubelet[2784]: E0119 12:01:23.549644 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.549854 kubelet[2784]: W0119 12:01:23.549834 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.549917 kubelet[2784]: E0119 12:01:23.549905 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.562534 containerd[1598]: time="2026-01-19T12:01:23.562482351Z" level=info msg="connecting to shim 90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011" address="unix:///run/containerd/s/6773024f4ab98429bc0b557d14cf08175b57a118348df62cd660b8f60a275454" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:01:23.572760 kubelet[2784]: E0119 12:01:23.571543 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.572760 kubelet[2784]: W0119 12:01:23.571567 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.572760 kubelet[2784]: E0119 12:01:23.571588 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.572760 kubelet[2784]: E0119 12:01:23.572526 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.572760 kubelet[2784]: W0119 12:01:23.572538 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.572760 kubelet[2784]: E0119 12:01:23.572551 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.573986 kubelet[2784]: E0119 12:01:23.573463 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.573986 kubelet[2784]: W0119 12:01:23.573480 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.573986 kubelet[2784]: E0119 12:01:23.573492 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.579996 kubelet[2784]: E0119 12:01:23.578920 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.579996 kubelet[2784]: W0119 12:01:23.579724 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.579996 kubelet[2784]: E0119 12:01:23.579744 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.598455 kubelet[2784]: E0119 12:01:23.596465 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.598455 kubelet[2784]: W0119 12:01:23.596490 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.598455 kubelet[2784]: E0119 12:01:23.596513 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.623272 kubelet[2784]: E0119 12:01:23.616814 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.623272 kubelet[2784]: W0119 12:01:23.617475 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.623272 kubelet[2784]: E0119 12:01:23.617505 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.629503 kubelet[2784]: E0119 12:01:23.629479 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.630821 kubelet[2784]: W0119 12:01:23.630796 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.630947 kubelet[2784]: E0119 12:01:23.630928 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.633824 kubelet[2784]: E0119 12:01:23.633808 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.633906 kubelet[2784]: W0119 12:01:23.633889 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.635304 kubelet[2784]: E0119 12:01:23.635283 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.639978 kubelet[2784]: E0119 12:01:23.639514 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.640648 kubelet[2784]: W0119 12:01:23.640631 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.641292 kubelet[2784]: E0119 12:01:23.641269 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.652174 kubelet[2784]: E0119 12:01:23.651315 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.652698 kubelet[2784]: W0119 12:01:23.652281 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.654217 kubelet[2784]: E0119 12:01:23.654196 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
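The driver-call.go / plugins.go messages repeating through this window are kubelet's FlexVolume prober: on every probe it executes each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ (here nodeagent~uds/uds) with the single argument init and unmarshals the driver's stdout as JSON. Because the uds binary is missing ("executable file not found in $PATH"), stdout is empty, the unmarshal fails with "unexpected end of JSON input", and the plugin is skipped until the next probe, when the same three messages appear again. A minimal sketch of the handshake the prober expects, written as a hypothetical Go stub (this is not the real nodeagent~uds driver, whose behavior this log never shows):

    // flexvol_init_stub.go -- hypothetical minimal FlexVolume driver stub.
    // kubelet runs the driver as "<driver> init" and expects a JSON status
    // object on stdout; an empty stdout is exactly what produces the
    // "unexpected end of JSON input" errors in the log above.
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Any verb this stub does not implement.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
    }

A driver that answers init with such an object (or removal of the stale nodeagent~uds directory) is what makes the repeating probe errors stop; kubelet keeps running either way, which is why the rest of the log proceeds normally around them.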
Error: unexpected end of JSON input" Jan 19 12:01:23.661661 kubelet[2784]: I0119 12:01:23.656800 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7402f958-3527-492d-aaa2-32f171fd00ee-socket-dir\") pod \"csi-node-driver-xx9qj\" (UID: \"7402f958-3527-492d-aaa2-32f171fd00ee\") " pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:23.662707 kubelet[2784]: E0119 12:01:23.661907 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.662707 kubelet[2784]: W0119 12:01:23.661929 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.662707 kubelet[2784]: E0119 12:01:23.661947 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.664655 kubelet[2784]: E0119 12:01:23.664276 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.664655 kubelet[2784]: W0119 12:01:23.664498 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.664655 kubelet[2784]: E0119 12:01:23.664515 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.667728 kubelet[2784]: E0119 12:01:23.667548 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.667728 kubelet[2784]: W0119 12:01:23.667565 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.667728 kubelet[2784]: E0119 12:01:23.667581 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.667728 kubelet[2784]: I0119 12:01:23.667617 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7402f958-3527-492d-aaa2-32f171fd00ee-varrun\") pod \"csi-node-driver-xx9qj\" (UID: \"7402f958-3527-492d-aaa2-32f171fd00ee\") " pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:23.674456 kubelet[2784]: E0119 12:01:23.671723 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.674456 kubelet[2784]: W0119 12:01:23.671819 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.674456 kubelet[2784]: E0119 12:01:23.671832 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.674456 kubelet[2784]: I0119 12:01:23.672569 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862rt\" (UniqueName: \"kubernetes.io/projected/7402f958-3527-492d-aaa2-32f171fd00ee-kube-api-access-862rt\") pod \"csi-node-driver-xx9qj\" (UID: \"7402f958-3527-492d-aaa2-32f171fd00ee\") " pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:23.680945 kubelet[2784]: E0119 12:01:23.680790 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.681002 kubelet[2784]: W0119 12:01:23.680987 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.681280 kubelet[2784]: E0119 12:01:23.681002 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.683860 kubelet[2784]: E0119 12:01:23.682925 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.683860 kubelet[2784]: W0119 12:01:23.682939 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.683860 kubelet[2784]: E0119 12:01:23.682950 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.683860 kubelet[2784]: E0119 12:01:23.683648 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.683860 kubelet[2784]: W0119 12:01:23.683660 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.683860 kubelet[2784]: E0119 12:01:23.683671 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.689258 kubelet[2784]: I0119 12:01:23.685276 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7402f958-3527-492d-aaa2-32f171fd00ee-kubelet-dir\") pod \"csi-node-driver-xx9qj\" (UID: \"7402f958-3527-492d-aaa2-32f171fd00ee\") " pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:23.689258 kubelet[2784]: E0119 12:01:23.687317 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.689258 kubelet[2784]: W0119 12:01:23.687432 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.689258 kubelet[2784]: E0119 12:01:23.687445 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.689258 kubelet[2784]: E0119 12:01:23.688559 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.689258 kubelet[2784]: W0119 12:01:23.688570 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.689258 kubelet[2784]: E0119 12:01:23.688581 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.692466 kubelet[2784]: E0119 12:01:23.689979 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.692466 kubelet[2784]: W0119 12:01:23.689993 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.692466 kubelet[2784]: E0119 12:01:23.690004 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.692466 kubelet[2784]: I0119 12:01:23.692220 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7402f958-3527-492d-aaa2-32f171fd00ee-registration-dir\") pod \"csi-node-driver-xx9qj\" (UID: \"7402f958-3527-492d-aaa2-32f171fd00ee\") " pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:23.694259 kubelet[2784]: E0119 12:01:23.693215 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.694259 kubelet[2784]: W0119 12:01:23.693230 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.694259 kubelet[2784]: E0119 12:01:23.693242 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.695836 kubelet[2784]: E0119 12:01:23.694506 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.695836 kubelet[2784]: W0119 12:01:23.694524 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.695836 kubelet[2784]: E0119 12:01:23.694536 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.698480 kubelet[2784]: E0119 12:01:23.696569 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.698480 kubelet[2784]: W0119 12:01:23.696587 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.698480 kubelet[2784]: E0119 12:01:23.696598 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.701526 kubelet[2784]: E0119 12:01:23.699880 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.701526 kubelet[2784]: W0119 12:01:23.699893 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.701526 kubelet[2784]: E0119 12:01:23.699905 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.701648 containerd[1598]: time="2026-01-19T12:01:23.700425359Z" level=info msg="connecting to shim 9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765" address="unix:///run/containerd/s/7b617b2e64714ad2b0571e9b271b9bf6daf90e7ce3b137ed3a790995e5dd29c5" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:01:23.732441 systemd[1]: Started cri-containerd-90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011.scope - libcontainer container 90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011. Jan 19 12:01:23.796739 kubelet[2784]: E0119 12:01:23.796165 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.796739 kubelet[2784]: W0119 12:01:23.796283 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.796739 kubelet[2784]: E0119 12:01:23.796303 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.799523 kubelet[2784]: E0119 12:01:23.798008 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.799523 kubelet[2784]: W0119 12:01:23.798722 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.799523 kubelet[2784]: E0119 12:01:23.798735 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.802503 kubelet[2784]: E0119 12:01:23.802315 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.802503 kubelet[2784]: W0119 12:01:23.802425 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.802503 kubelet[2784]: E0119 12:01:23.802436 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.803871 kubelet[2784]: E0119 12:01:23.803720 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.803871 kubelet[2784]: W0119 12:01:23.803816 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.803871 kubelet[2784]: E0119 12:01:23.803826 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.808575 kubelet[2784]: E0119 12:01:23.808276 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.808575 kubelet[2784]: W0119 12:01:23.808490 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.808575 kubelet[2784]: E0119 12:01:23.808505 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.809000 audit: BPF prog-id=151 op=LOAD Jan 19 12:01:23.814511 kubelet[2784]: E0119 12:01:23.814492 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.814578 kubelet[2784]: W0119 12:01:23.814566 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.814625 kubelet[2784]: E0119 12:01:23.814615 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.815740 kubelet[2784]: E0119 12:01:23.815253 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.815740 kubelet[2784]: W0119 12:01:23.815270 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.815740 kubelet[2784]: E0119 12:01:23.815280 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.814000 audit: BPF prog-id=152 op=LOAD Jan 19 12:01:23.816500 kubelet[2784]: E0119 12:01:23.815811 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.816500 kubelet[2784]: W0119 12:01:23.815821 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.816500 kubelet[2784]: E0119 12:01:23.815829 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.814000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.815000 audit: BPF prog-id=152 op=UNLOAD Jan 19 12:01:23.815000 audit[3284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.815000 audit: BPF prog-id=153 op=LOAD Jan 19 12:01:23.815000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.816000 audit: BPF prog-id=154 op=LOAD Jan 19 12:01:23.816000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.816000 audit: BPF prog-id=154 op=UNLOAD Jan 19 12:01:23.816000 audit[3284]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.816000 audit: BPF prog-id=153 op=UNLOAD Jan 19 12:01:23.816000 audit[3284]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.816000 audit: BPF prog-id=155 op=LOAD Jan 19 12:01:23.816000 audit[3284]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3263 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656434633630616235623639653564393938656337393936653634 Jan 19 12:01:23.821544 kubelet[2784]: E0119 12:01:23.817004 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.821544 kubelet[2784]: W0119 12:01:23.817189 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.821544 kubelet[2784]: E0119 12:01:23.817206 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.821544 kubelet[2784]: E0119 12:01:23.820306 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.821544 kubelet[2784]: W0119 12:01:23.820315 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.821544 kubelet[2784]: E0119 12:01:23.820429 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.822304 kubelet[2784]: E0119 12:01:23.822288 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.823507 kubelet[2784]: W0119 12:01:23.823452 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.823507 kubelet[2784]: E0119 12:01:23.823475 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.824259 kubelet[2784]: E0119 12:01:23.823963 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.824259 kubelet[2784]: W0119 12:01:23.823977 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.824259 kubelet[2784]: E0119 12:01:23.823988 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.824553 kubelet[2784]: E0119 12:01:23.824471 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.824553 kubelet[2784]: W0119 12:01:23.824480 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.824553 kubelet[2784]: E0119 12:01:23.824489 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.826292 kubelet[2784]: E0119 12:01:23.824652 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.826292 kubelet[2784]: W0119 12:01:23.824666 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.826292 kubelet[2784]: E0119 12:01:23.824674 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.826292 kubelet[2784]: E0119 12:01:23.824939 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.826292 kubelet[2784]: W0119 12:01:23.824951 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.826292 kubelet[2784]: E0119 12:01:23.824963 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.827963 kubelet[2784]: E0119 12:01:23.827681 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.827963 kubelet[2784]: W0119 12:01:23.827692 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.827963 kubelet[2784]: E0119 12:01:23.827701 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.829260 kubelet[2784]: E0119 12:01:23.828684 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.829260 kubelet[2784]: W0119 12:01:23.828697 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.829260 kubelet[2784]: E0119 12:01:23.828709 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.828878 systemd[1]: Started cri-containerd-9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765.scope - libcontainer container 9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765. Jan 19 12:01:23.830732 kubelet[2784]: E0119 12:01:23.830694 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.830732 kubelet[2784]: W0119 12:01:23.830708 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.830732 kubelet[2784]: E0119 12:01:23.830721 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.832305 kubelet[2784]: E0119 12:01:23.831936 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.834164 kubelet[2784]: W0119 12:01:23.833230 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.834164 kubelet[2784]: E0119 12:01:23.833250 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.839464 kubelet[2784]: E0119 12:01:23.837531 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.839464 kubelet[2784]: W0119 12:01:23.837549 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.839464 kubelet[2784]: E0119 12:01:23.837564 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.842002 kubelet[2784]: E0119 12:01:23.840733 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.842002 kubelet[2784]: W0119 12:01:23.840749 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.842002 kubelet[2784]: E0119 12:01:23.840765 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.842002 kubelet[2784]: E0119 12:01:23.841640 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.842002 kubelet[2784]: W0119 12:01:23.841656 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.842002 kubelet[2784]: E0119 12:01:23.841673 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.851760 kubelet[2784]: E0119 12:01:23.851738 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.852786 kubelet[2784]: W0119 12:01:23.852767 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.852864 kubelet[2784]: E0119 12:01:23.852851 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.857217 kubelet[2784]: E0119 12:01:23.857203 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.857280 kubelet[2784]: W0119 12:01:23.857269 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.857324 kubelet[2784]: E0119 12:01:23.857315 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 12:01:23.858244 kubelet[2784]: E0119 12:01:23.858231 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.858313 kubelet[2784]: W0119 12:01:23.858302 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.858474 kubelet[2784]: E0119 12:01:23.858462 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.876324 kubelet[2784]: E0119 12:01:23.875761 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 12:01:23.876324 kubelet[2784]: W0119 12:01:23.875778 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 12:01:23.876324 kubelet[2784]: E0119 12:01:23.875791 2784 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 12:01:23.877000 audit: BPF prog-id=156 op=LOAD Jan 19 12:01:23.878000 audit: BPF prog-id=157 op=LOAD Jan 19 12:01:23.878000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.878000 audit: BPF prog-id=157 op=UNLOAD Jan 19 12:01:23.878000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.878000 audit: BPF prog-id=158 op=LOAD Jan 19 12:01:23.878000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.879000 audit: BPF prog-id=159 op=LOAD Jan 19 
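The audit lines mixed in here are type=1334 (BPF) program LOAD/UNLOAD events plus the matching SYSCALL/PROCTITLE records emitted while runc sets up the new sandboxes; the PROCTITLE value hex-encodes the audited command line with NUL bytes between arguments, so the long 72756E63... strings are simply the runc invocation for the task being created. A small decoder, offered only as a hypothetical helper for reading these records:

    // decode_proctitle.go -- hypothetical helper: turn an audit PROCTITLE
    // hex string (argv entries separated by NUL bytes) back into a command line.
    package main

    import (
        "encoding/hex"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: decode_proctitle <hex-proctitle>")
            os.Exit(1)
        }
        raw, err := hex.DecodeString(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "not valid hex:", err)
            os.Exit(1)
        }
        // audit stores argv[0]\0argv[1]\0...; swap the NULs for spaces.
        args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
        fmt.Println(strings.Join(args, " "))
    }

Fed the proctitle from the pid 3335 records above, it prints runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/9b7d46c0eb6548f25e58b3e2849ae..., with the task id cut short because audit truncates long proctitle values; the earlier pid 3284 records decode the same way for task 90ed4c60....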
12:01:23.879000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.879000 audit: BPF prog-id=159 op=UNLOAD Jan 19 12:01:23.879000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.879000 audit: BPF prog-id=158 op=UNLOAD Jan 19 12:01:23.879000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.879000 audit: BPF prog-id=160 op=LOAD Jan 19 12:01:23.879000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3311 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:23.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962376434366330656236353438663235653538623365323834396165 Jan 19 12:01:23.976632 containerd[1598]: time="2026-01-19T12:01:23.970919099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d779687bb-ftjkf,Uid:d2fc4b4e-053f-4b1e-9874-2d7b316d906e,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011\"" Jan 19 12:01:23.982192 containerd[1598]: time="2026-01-19T12:01:23.981924095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7s9l,Uid:2f95fdcb-8c3f-4ef4-9dfa-7ab1b2468395,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\"" Jan 19 12:01:23.987537 kubelet[2784]: E0119 12:01:23.986887 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 
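The dns.go:153 "Nameserver limits exceeded" warnings, here and below, are kubelet noting that the node's resolver configuration lists more nameservers than it will propagate into pods: the list is capped at three (matching the glibc resolver limit), the extras are dropped, and the applied set is the one printed in the message. A sketch of a resolver file that stays inside that cap, using the three addresses the log says were applied; the file path and any search/options lines on the real host are assumptions:

    # /etc/resolv.conf (or whichever file kubelet's --resolv-conf points at)
    # at most three nameserver entries, so kubelet has nothing to drop
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8

The warning does not block anything by itself, which is why the same message simply recurs later in this log without any related failure.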
12:01:23.988275 kubelet[2784]: E0119 12:01:23.987940 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:24.000659 containerd[1598]: time="2026-01-19T12:01:24.000218837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 19 12:01:25.162899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735255460.mount: Deactivated successfully. Jan 19 12:01:25.221770 kubelet[2784]: E0119 12:01:25.221582 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:25.509483 containerd[1598]: time="2026-01-19T12:01:25.508754915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:25.513606 containerd[1598]: time="2026-01-19T12:01:25.513167797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 19 12:01:25.517511 containerd[1598]: time="2026-01-19T12:01:25.516919656Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:25.523801 containerd[1598]: time="2026-01-19T12:01:25.523556605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:25.524878 containerd[1598]: time="2026-01-19T12:01:25.524625522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.524374284s" Jan 19 12:01:25.524878 containerd[1598]: time="2026-01-19T12:01:25.524759496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 19 12:01:25.534130 containerd[1598]: time="2026-01-19T12:01:25.531976498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 19 12:01:25.546326 containerd[1598]: time="2026-01-19T12:01:25.544385954Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 19 12:01:25.578504 containerd[1598]: time="2026-01-19T12:01:25.577869458Z" level=info msg="Container 01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:01:25.609565 containerd[1598]: time="2026-01-19T12:01:25.607199566Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id 
\"01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911\"" Jan 19 12:01:25.613643 containerd[1598]: time="2026-01-19T12:01:25.612918654Z" level=info msg="StartContainer for \"01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911\"" Jan 19 12:01:25.622935 containerd[1598]: time="2026-01-19T12:01:25.622808465Z" level=info msg="connecting to shim 01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911" address="unix:///run/containerd/s/7b617b2e64714ad2b0571e9b271b9bf6daf90e7ce3b137ed3a790995e5dd29c5" protocol=ttrpc version=3 Jan 19 12:01:25.707620 systemd[1]: Started cri-containerd-01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911.scope - libcontainer container 01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911. Jan 19 12:01:25.833000 audit: BPF prog-id=161 op=LOAD Jan 19 12:01:25.842853 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 19 12:01:25.842933 kernel: audit: type=1334 audit(1768824085.833:551): prog-id=161 op=LOAD Jan 19 12:01:25.833000 audit[3408]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:25.935583 kernel: audit: type=1300 audit(1768824085.833:551): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.935674 kernel: audit: type=1327 audit(1768824085.833:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:25.833000 audit: BPF prog-id=162 op=LOAD Jan 19 12:01:25.951291 kernel: audit: type=1334 audit(1768824085.833:552): prog-id=162 op=LOAD Jan 19 12:01:25.951354 kernel: audit: type=1300 audit(1768824085.833:552): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit[3408]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:26.014824 containerd[1598]: time="2026-01-19T12:01:26.014671910Z" level=info 
msg="StartContainer for \"01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911\" returns successfully" Jan 19 12:01:26.022752 kernel: audit: type=1327 audit(1768824085.833:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:25.833000 audit: BPF prog-id=162 op=UNLOAD Jan 19 12:01:26.034344 kernel: audit: type=1334 audit(1768824085.833:553): prog-id=162 op=UNLOAD Jan 19 12:01:25.833000 audit[3408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:26.070814 systemd[1]: cri-containerd-01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911.scope: Deactivated successfully. Jan 19 12:01:26.083713 containerd[1598]: time="2026-01-19T12:01:26.083549409Z" level=info msg="received container exit event container_id:\"01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911\" id:\"01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911\" pid:3420 exited_at:{seconds:1768824086 nanos:82238823}" Jan 19 12:01:26.104826 kernel: audit: type=1300 audit(1768824085.833:553): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:26.104929 kernel: audit: type=1327 audit(1768824085.833:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:25.833000 audit: BPF prog-id=161 op=UNLOAD Jan 19 12:01:25.833000 audit[3408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:25.833000 audit: BPF prog-id=163 op=LOAD Jan 19 12:01:25.833000 audit[3408]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3311 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:25.833000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616566323739323762646464656633313663653437343766313564 Jan 19 12:01:26.117199 kernel: audit: type=1334 audit(1768824085.833:554): prog-id=161 op=UNLOAD Jan 19 12:01:26.118000 audit: BPF prog-id=163 op=UNLOAD Jan 19 12:01:26.262782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01aef27927bdddef316ce4747f15dc0d04431b90d49dd70ae0aec333c4520911-rootfs.mount: Deactivated successfully. Jan 19 12:01:26.491379 kubelet[2784]: E0119 12:01:26.490815 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:27.221671 kubelet[2784]: E0119 12:01:27.221258 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:29.224238 kubelet[2784]: E0119 12:01:29.223649 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:31.223550 kubelet[2784]: E0119 12:01:31.223307 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:31.663983 containerd[1598]: time="2026-01-19T12:01:31.663481809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:31.678392 containerd[1598]: time="2026-01-19T12:01:31.677725295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 19 12:01:31.681523 containerd[1598]: time="2026-01-19T12:01:31.681349604Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:31.688914 containerd[1598]: time="2026-01-19T12:01:31.688810800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:31.689723 containerd[1598]: time="2026-01-19T12:01:31.689326561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 6.157324745s" Jan 19 12:01:31.689723 containerd[1598]: time="2026-01-19T12:01:31.689352781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image 
reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 19 12:01:31.694777 containerd[1598]: time="2026-01-19T12:01:31.693280750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 19 12:01:31.734980 containerd[1598]: time="2026-01-19T12:01:31.734921172Z" level=info msg="CreateContainer within sandbox \"90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 19 12:01:31.762774 containerd[1598]: time="2026-01-19T12:01:31.762381616Z" level=info msg="Container 46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:01:31.808203 containerd[1598]: time="2026-01-19T12:01:31.807276617Z" level=info msg="CreateContainer within sandbox \"90ed4c60ab5b69e5d998ec7996e641d5766cd286db93f278c76a8a89d368d011\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9\"" Jan 19 12:01:31.808765 containerd[1598]: time="2026-01-19T12:01:31.808743557Z" level=info msg="StartContainer for \"46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9\"" Jan 19 12:01:31.815164 containerd[1598]: time="2026-01-19T12:01:31.813721218Z" level=info msg="connecting to shim 46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9" address="unix:///run/containerd/s/6773024f4ab98429bc0b557d14cf08175b57a118348df62cd660b8f60a275454" protocol=ttrpc version=3 Jan 19 12:01:31.902798 systemd[1]: Started cri-containerd-46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9.scope - libcontainer container 46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9. Jan 19 12:01:32.077000 audit: BPF prog-id=164 op=LOAD Jan 19 12:01:32.093228 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 19 12:01:32.093591 kernel: audit: type=1334 audit(1768824092.077:557): prog-id=164 op=LOAD Jan 19 12:01:32.079000 audit: BPF prog-id=165 op=LOAD Jan 19 12:01:32.113452 kernel: audit: type=1334 audit(1768824092.079:558): prog-id=165 op=LOAD Jan 19 12:01:32.113508 kernel: audit: type=1300 audit(1768824092.079:558): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.212464 kernel: audit: type=1327 audit(1768824092.079:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.212597 kernel: audit: type=1334 audit(1768824092.079:559): prog-id=165 op=UNLOAD Jan 
19 12:01:32.079000 audit: BPF prog-id=165 op=UNLOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.275001 kernel: audit: type=1300 audit(1768824092.079:559): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.277388 kernel: audit: type=1327 audit(1768824092.079:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: BPF prog-id=166 op=LOAD Jan 19 12:01:32.338970 kernel: audit: type=1334 audit(1768824092.079:560): prog-id=166 op=LOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.357317 containerd[1598]: time="2026-01-19T12:01:32.356344690Z" level=info msg="StartContainer for \"46f288fcb306f469c387f267902935f1e66b0bb2908c62b0888fea83352fdbd9\" returns successfully" Jan 19 12:01:32.382755 kernel: audit: type=1300 audit(1768824092.079:560): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.440902 kernel: audit: type=1327 audit(1768824092.079:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: BPF prog-id=167 op=LOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: BPF prog-id=167 op=UNLOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: BPF prog-id=166 op=UNLOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.079000 audit: BPF prog-id=168 op=LOAD Jan 19 12:01:32.079000 audit[3468]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3263 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:32.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436663238386663623330366634363963333837663236373930323933 Jan 19 12:01:32.548126 kubelet[2784]: E0119 12:01:32.545465 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:33.220243 kubelet[2784]: E0119 12:01:33.219839 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:33.553270 kubelet[2784]: I0119 12:01:33.552322 2784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 19 12:01:33.554902 kubelet[2784]: E0119 12:01:33.554888 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:35.225476 kubelet[2784]: E0119 12:01:35.225423 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:37.220650 kubelet[2784]: E0119 12:01:37.220608 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:38.861561 containerd[1598]: time="2026-01-19T12:01:38.860988217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:38.865420 containerd[1598]: time="2026-01-19T12:01:38.865376143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 19 12:01:38.869532 containerd[1598]: time="2026-01-19T12:01:38.869446514Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:38.875985 containerd[1598]: time="2026-01-19T12:01:38.875410175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:01:38.878502 containerd[1598]: time="2026-01-19T12:01:38.876564189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.183250157s" Jan 19 12:01:38.879342 containerd[1598]: time="2026-01-19T12:01:38.878657169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 19 12:01:38.890694 containerd[1598]: time="2026-01-19T12:01:38.890509634Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 19 12:01:38.926716 containerd[1598]: time="2026-01-19T12:01:38.926389472Z" level=info msg="Container 5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:01:38.953255 containerd[1598]: time="2026-01-19T12:01:38.952965423Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b\"" Jan 19 12:01:38.956350 containerd[1598]: time="2026-01-19T12:01:38.955633263Z" level=info msg="StartContainer for \"5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b\"" Jan 19 12:01:38.958714 containerd[1598]: time="2026-01-19T12:01:38.958680738Z" level=info msg="connecting to shim 5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b" address="unix:///run/containerd/s/7b617b2e64714ad2b0571e9b271b9bf6daf90e7ce3b137ed3a790995e5dd29c5" protocol=ttrpc version=3 Jan 19 12:01:39.040530 systemd[1]: Started 
cri-containerd-5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b.scope - libcontainer container 5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b. Jan 19 12:01:39.181000 audit: BPF prog-id=169 op=LOAD Jan 19 12:01:39.191993 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 19 12:01:39.192320 kernel: audit: type=1334 audit(1768824099.181:565): prog-id=169 op=LOAD Jan 19 12:01:39.181000 audit[3516]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.254728 kubelet[2784]: E0119 12:01:39.254439 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:39.286380 kernel: audit: type=1300 audit(1768824099.181:565): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.286510 kernel: audit: type=1327 audit(1768824099.181:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.288959 kernel: audit: type=1334 audit(1768824099.181:566): prog-id=170 op=LOAD Jan 19 12:01:39.181000 audit: BPF prog-id=170 op=LOAD Jan 19 12:01:39.181000 audit[3516]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.343687 kernel: audit: type=1300 audit(1768824099.181:566): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.378650 containerd[1598]: time="2026-01-19T12:01:39.377613002Z" level=info msg="StartContainer for \"5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b\" returns successfully" Jan 19 12:01:39.388336 kernel: audit: type=1327 audit(1768824099.181:566): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.182000 audit: BPF prog-id=170 op=UNLOAD Jan 19 12:01:39.401469 kernel: audit: type=1334 audit(1768824099.182:567): prog-id=170 op=UNLOAD Jan 19 12:01:39.401599 kernel: audit: type=1300 audit(1768824099.182:567): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.182000 audit[3516]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.488224 kernel: audit: type=1327 audit(1768824099.182:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.489636 kernel: audit: type=1334 audit(1768824099.182:568): prog-id=169 op=UNLOAD Jan 19 12:01:39.182000 audit: BPF prog-id=169 op=UNLOAD Jan 19 12:01:39.182000 audit[3516]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.182000 audit: BPF prog-id=171 op=LOAD Jan 19 12:01:39.182000 audit[3516]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3311 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:39.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363462646564306239643333653030623431383965343532656333 Jan 19 12:01:39.585349 kubelet[2784]: E0119 12:01:39.583724 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:39.643484 kubelet[2784]: I0119 12:01:39.642985 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-5d779687bb-ftjkf" podStartSLOduration=9.940338031 podStartE2EDuration="17.642965678s" podCreationTimestamp="2026-01-19 12:01:22 +0000 UTC" firstStartedPulling="2026-01-19 12:01:23.989777502 +0000 UTC m=+29.097187873" lastFinishedPulling="2026-01-19 12:01:31.692405147 +0000 UTC m=+36.799815520" observedRunningTime="2026-01-19 12:01:32.609388699 +0000 UTC m=+37.716799111" watchObservedRunningTime="2026-01-19 12:01:39.642965678 +0000 UTC m=+44.750376060" Jan 19 12:01:40.597251 kubelet[2784]: E0119 12:01:40.596530 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:41.055551 systemd[1]: cri-containerd-5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b.scope: Deactivated successfully. Jan 19 12:01:41.057009 systemd[1]: cri-containerd-5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b.scope: Consumed 2.166s CPU time, 180.4M memory peak, 5.6M read from disk, 171.3M written to disk. Jan 19 12:01:41.061000 audit: BPF prog-id=171 op=UNLOAD Jan 19 12:01:41.069004 containerd[1598]: time="2026-01-19T12:01:41.068746538Z" level=info msg="received container exit event container_id:\"5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b\" id:\"5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b\" pid:3528 exited_at:{seconds:1768824101 nanos:64695734}" Jan 19 12:01:41.183819 kubelet[2784]: I0119 12:01:41.183468 2784 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 19 12:01:41.217360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5964bded0b9d33e00b4189e452ec3f568fb87e9479aff63b33b78f6edc6fbd0b-rootfs.mount: Deactivated successfully. Jan 19 12:01:41.286774 systemd[1]: Created slice kubepods-besteffort-pod7402f958_3527_492d_aaa2_32f171fd00ee.slice - libcontainer container kubepods-besteffort-pod7402f958_3527_492d_aaa2_32f171fd00ee.slice. Jan 19 12:01:41.321579 containerd[1598]: time="2026-01-19T12:01:41.319680719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:41.408415 kubelet[2784]: I0119 12:01:41.408306 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6vm\" (UniqueName: \"kubernetes.io/projected/b7bafa95-2a0c-41ee-a149-7208583b6960-kube-api-access-7s6vm\") pod \"calico-kube-controllers-74c9b656d5-9mc5x\" (UID: \"b7bafa95-2a0c-41ee-a149-7208583b6960\") " pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:41.408415 kubelet[2784]: I0119 12:01:41.408358 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7bafa95-2a0c-41ee-a149-7208583b6960-tigera-ca-bundle\") pod \"calico-kube-controllers-74c9b656d5-9mc5x\" (UID: \"b7bafa95-2a0c-41ee-a149-7208583b6960\") " pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:41.428385 systemd[1]: Created slice kubepods-besteffort-podb7bafa95_2a0c_41ee_a149_7208583b6960.slice - libcontainer container kubepods-besteffort-podb7bafa95_2a0c_41ee_a149_7208583b6960.slice. Jan 19 12:01:41.508707 systemd[1]: Created slice kubepods-burstable-podafffad65_2533_4140_be9c_666164ec7581.slice - libcontainer container kubepods-burstable-podafffad65_2533_4140_be9c_666164ec7581.slice. 
Jan 19 12:01:41.514825 kubelet[2784]: I0119 12:01:41.512581 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-backend-key-pair\") pod \"whisker-f685f8cc-t6mtc\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:41.514825 kubelet[2784]: I0119 12:01:41.513514 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcr55\" (UniqueName: \"kubernetes.io/projected/bd84ecc0-0a49-4305-b46e-8f992897ba53-kube-api-access-zcr55\") pod \"calico-apiserver-75b8686f69-vv49f\" (UID: \"bd84ecc0-0a49-4305-b46e-8f992897ba53\") " pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:41.514825 kubelet[2784]: I0119 12:01:41.513560 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afffad65-2533-4140-be9c-666164ec7581-config-volume\") pod \"coredns-674b8bbfcf-hpwks\" (UID: \"afffad65-2533-4140-be9c-666164ec7581\") " pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:41.514825 kubelet[2784]: I0119 12:01:41.513701 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-ca-bundle\") pod \"whisker-f685f8cc-t6mtc\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:41.537837 kubelet[2784]: I0119 12:01:41.536981 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d496d\" (UniqueName: \"kubernetes.io/projected/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-kube-api-access-d496d\") pod \"whisker-f685f8cc-t6mtc\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:41.549244 kubelet[2784]: I0119 12:01:41.549000 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgvr\" (UniqueName: \"kubernetes.io/projected/afffad65-2533-4140-be9c-666164ec7581-kube-api-access-cvgvr\") pod \"coredns-674b8bbfcf-hpwks\" (UID: \"afffad65-2533-4140-be9c-666164ec7581\") " pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:41.549244 kubelet[2784]: I0119 12:01:41.549218 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd84ecc0-0a49-4305-b46e-8f992897ba53-calico-apiserver-certs\") pod \"calico-apiserver-75b8686f69-vv49f\" (UID: \"bd84ecc0-0a49-4305-b46e-8f992897ba53\") " pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:41.559836 systemd[1]: Created slice kubepods-besteffort-podbd84ecc0_0a49_4305_b46e_8f992897ba53.slice - libcontainer container kubepods-besteffort-podbd84ecc0_0a49_4305_b46e_8f992897ba53.slice. Jan 19 12:01:41.610779 systemd[1]: Created slice kubepods-besteffort-podcccc157f_015a_4ddd_9ea1_4caf6cdd3948.slice - libcontainer container kubepods-besteffort-podcccc157f_015a_4ddd_9ea1_4caf6cdd3948.slice. 
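The "Created slice" messages above and below show the systemd cgroup names kubelet requests: kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID rewritten to underscores because "-" is systemd's slice separator. A small illustrative helper (not kubelet's real implementation) that reproduces the names seen in the log:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds a pod slice name as it appears in the log:
// kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the whisker-f685f8cc-t6mtc entries above.
	fmt.Println(sliceName("besteffort", "cccc157f-015a-4ddd-9ea1-4caf6cdd3948"))
	// -> kubepods-besteffort-podcccc157f_015a_4ddd_9ea1_4caf6cdd3948.slice
}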
Jan 19 12:01:41.653858 kubelet[2784]: I0119 12:01:41.653252 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxw6n\" (UniqueName: \"kubernetes.io/projected/668ec2ef-a9db-464e-8f8c-faf18a92d85c-kube-api-access-xxw6n\") pod \"coredns-674b8bbfcf-pds2j\" (UID: \"668ec2ef-a9db-464e-8f8c-faf18a92d85c\") " pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:41.653858 kubelet[2784]: I0119 12:01:41.653437 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pgp\" (UniqueName: \"kubernetes.io/projected/de448f12-2894-4550-a3a5-5ddf27420cbb-kube-api-access-q8pgp\") pod \"calico-apiserver-75b8686f69-mdc5b\" (UID: \"de448f12-2894-4550-a3a5-5ddf27420cbb\") " pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:41.653858 kubelet[2784]: I0119 12:01:41.653465 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f3cc3a4-a155-47d4-9e99-0f5c3bb53331-goldmane-ca-bundle\") pod \"goldmane-666569f655-xg6zc\" (UID: \"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331\") " pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:41.653858 kubelet[2784]: I0119 12:01:41.653509 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de448f12-2894-4550-a3a5-5ddf27420cbb-calico-apiserver-certs\") pod \"calico-apiserver-75b8686f69-mdc5b\" (UID: \"de448f12-2894-4550-a3a5-5ddf27420cbb\") " pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:41.653858 kubelet[2784]: I0119 12:01:41.653535 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/668ec2ef-a9db-464e-8f8c-faf18a92d85c-config-volume\") pod \"coredns-674b8bbfcf-pds2j\" (UID: \"668ec2ef-a9db-464e-8f8c-faf18a92d85c\") " pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:41.654821 kubelet[2784]: I0119 12:01:41.653560 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3f3cc3a4-a155-47d4-9e99-0f5c3bb53331-goldmane-key-pair\") pod \"goldmane-666569f655-xg6zc\" (UID: \"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331\") " pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:41.654821 kubelet[2784]: I0119 12:01:41.653601 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3cc3a4-a155-47d4-9e99-0f5c3bb53331-config\") pod \"goldmane-666569f655-xg6zc\" (UID: \"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331\") " pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:41.654821 kubelet[2784]: I0119 12:01:41.653622 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlngv\" (UniqueName: \"kubernetes.io/projected/3f3cc3a4-a155-47d4-9e99-0f5c3bb53331-kube-api-access-zlngv\") pod \"goldmane-666569f655-xg6zc\" (UID: \"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331\") " pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:41.706380 systemd[1]: Created slice kubepods-burstable-pod668ec2ef_a9db_464e_8f8c_faf18a92d85c.slice - libcontainer container kubepods-burstable-pod668ec2ef_a9db_464e_8f8c_faf18a92d85c.slice. 
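The recurring dns.go:153 "Nameserver limits exceeded" warnings (another one follows immediately below) indicate that the host resolv.conf lists more nameservers than kubelet will pass through to pods, so only the first few are applied. A rough sketch of that trimming, assuming the conventional limit of three nameservers; the constant and helper names are made up for illustration:

package main

import "fmt"

const maxNameservers = 3 // assumed limit; matches the three-server "applied nameserver line" in the log

// capNameservers keeps only the first maxNameservers entries and reports
// whether anything was dropped, mirroring the warning's wording.
func capNameservers(servers []string) (applied []string, omitted bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with a fourth resolver that gets dropped.
	applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	fmt.Println(applied, omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}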
Jan 19 12:01:41.741548 kubelet[2784]: E0119 12:01:41.741523 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:41.749796 containerd[1598]: time="2026-01-19T12:01:41.749455339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:41.764431 containerd[1598]: time="2026-01-19T12:01:41.749822396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 19 12:01:41.793866 systemd[1]: Created slice kubepods-besteffort-pod3f3cc3a4_a155_47d4_9e99_0f5c3bb53331.slice - libcontainer container kubepods-besteffort-pod3f3cc3a4_a155_47d4_9e99_0f5c3bb53331.slice. Jan 19 12:01:41.839011 systemd[1]: Created slice kubepods-besteffort-podde448f12_2894_4550_a3a5_5ddf27420cbb.slice - libcontainer container kubepods-besteffort-podde448f12_2894_4550_a3a5_5ddf27420cbb.slice. Jan 19 12:01:41.889335 kubelet[2784]: E0119 12:01:41.888431 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:41.894388 containerd[1598]: time="2026-01-19T12:01:41.892668066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,}" Jan 19 12:01:41.926288 containerd[1598]: time="2026-01-19T12:01:41.926004053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:01:41.944245 containerd[1598]: time="2026-01-19T12:01:41.943281531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:42.102244 kubelet[2784]: E0119 12:01:42.102006 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:42.105583 containerd[1598]: time="2026-01-19T12:01:42.105546018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,}" Jan 19 12:01:42.124753 containerd[1598]: time="2026-01-19T12:01:42.124687353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:42.155288 containerd[1598]: time="2026-01-19T12:01:42.154558168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:01:42.389819 containerd[1598]: time="2026-01-19T12:01:42.387261384Z" level=error msg="Failed to destroy network for sandbox \"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.392408 systemd[1]: run-netns-cni\x2d725020a4\x2dd06c\x2d3b62\x2d7f37\x2d245c99d3e18d.mount: Deactivated successfully. 
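The sandbox failures that follow all share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes only after it has started, and at this point the node agent is not up yet. A minimal sketch of that check (an assumption-laden reproduction, not the plugin's actual code) which yields the same message seen in the errors below:

package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node once it is running

// checkNodename mirrors the failure mode in the log: if the file is missing,
// surface the same hint the CNI plugin prints.
func checkNodename() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := checkNodename(); err != nil {
		// e.g. "stat /var/lib/calico/nodename: no such file or directory: check that ..."
		fmt.Println(err)
	}
}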
Jan 19 12:01:42.465513 containerd[1598]: time="2026-01-19T12:01:42.464593513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.474775 kubelet[2784]: E0119 12:01:42.472468 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.474775 kubelet[2784]: E0119 12:01:42.472563 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:42.474775 kubelet[2784]: E0119 12:01:42.472590 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:42.475372 kubelet[2784]: E0119 12:01:42.472655 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1ae7177ed3878d2e309abba2cffa07e6e7f41fed17071dbe0abe472b700fa2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:01:42.504259 containerd[1598]: time="2026-01-19T12:01:42.498805346Z" level=error msg="Failed to destroy network for sandbox \"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.502399 systemd[1]: run-netns-cni\x2d7f02ab2d\x2dae7e\x2d9225\x2de9d5\x2df39382cad634.mount: Deactivated successfully. 
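Note how the same CNI error is logged four times per pod (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) and gains a level of backslash escaping each time it is re-quoted, so \" in the runtime error becomes \\\" by the time pod_workers prints it. A small demonstration of that quoting effect using plain Go string quoting; it only illustrates why the escapes multiply, not kubelet's logging internals:

package main

import "fmt"

func main() {
	// The innermost message roughly as the CNI plugin produced it.
	inner := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`

	once := fmt.Sprintf("%q", inner) // quoted once: \" around calico
	twice := fmt.Sprintf("%q", once) // quoted again: \\\" around calico

	fmt.Println(once)
	fmt.Println(twice)
}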
Jan 19 12:01:42.530295 containerd[1598]: time="2026-01-19T12:01:42.526313123Z" level=error msg="Failed to destroy network for sandbox \"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.529824 systemd[1]: run-netns-cni\x2df7276885\x2d7879\x2d5f74\x2d8087\x2db7510e5c0cb4.mount: Deactivated successfully. Jan 19 12:01:42.556800 containerd[1598]: time="2026-01-19T12:01:42.555528511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.557518 kubelet[2784]: E0119 12:01:42.556524 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.557518 kubelet[2784]: E0119 12:01:42.556582 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:42.557518 kubelet[2784]: E0119 12:01:42.556606 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:42.557669 kubelet[2784]: E0119 12:01:42.556647 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c994e17099f1a3fabfaa6995b23927f355e00150ae637198abc7dae62a668dea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:01:42.559467 containerd[1598]: time="2026-01-19T12:01:42.559214919Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.559780 kubelet[2784]: E0119 12:01:42.559744 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.559818 kubelet[2784]: E0119 12:01:42.559790 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:42.559840 kubelet[2784]: E0119 12:01:42.559817 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:42.560680 kubelet[2784]: E0119 12:01:42.559869 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5b056545f1ac1e614913396925b1f45987de2165dfe726fc0c619d819a3e373\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:42.653802 containerd[1598]: time="2026-01-19T12:01:42.652779012Z" level=error msg="Failed to destroy network for sandbox \"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.656634 containerd[1598]: time="2026-01-19T12:01:42.656477509Z" level=error msg="Failed to destroy network for sandbox \"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.663398 systemd[1]: 
run-netns-cni\x2dd81e8508\x2d958b\x2d1544\x2d8164\x2d2850232900d1.mount: Deactivated successfully. Jan 19 12:01:42.663635 systemd[1]: run-netns-cni\x2dae447f55\x2d42fd\x2d177b\x2d88aa\x2d6d5d88c158cf.mount: Deactivated successfully. Jan 19 12:01:42.682354 containerd[1598]: time="2026-01-19T12:01:42.682297905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.685376 kubelet[2784]: E0119 12:01:42.685335 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.686262 kubelet[2784]: E0119 12:01:42.686224 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:42.686397 kubelet[2784]: E0119 12:01:42.686367 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:42.689363 containerd[1598]: time="2026-01-19T12:01:42.688880644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.691340 kubelet[2784]: E0119 12:01:42.690867 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hpwks_kube-system(afffad65-2533-4140-be9c-666164ec7581)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hpwks_kube-system(afffad65-2533-4140-be9c-666164ec7581)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc2a93961059c9b78ad8785f79720d293b3761a0c3820422b91e68f6de33a136\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-hpwks" podUID="afffad65-2533-4140-be9c-666164ec7581" Jan 19 12:01:42.691781 kubelet[2784]: E0119 12:01:42.691751 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.692639 kubelet[2784]: E0119 12:01:42.692495 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:42.692639 kubelet[2784]: E0119 12:01:42.692535 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:42.692639 kubelet[2784]: E0119 12:01:42.692589 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d1305a0c912a0c7e0b80d925c4eda188d8af748a88c1729c2df8eac0eb1cd20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f685f8cc-t6mtc" podUID="cccc157f-015a-4ddd-9ea1-4caf6cdd3948" Jan 19 12:01:42.698546 containerd[1598]: time="2026-01-19T12:01:42.698485431Z" level=error msg="Failed to destroy network for sandbox \"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.736482 containerd[1598]: time="2026-01-19T12:01:42.729653342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.749664 kubelet[2784]: E0119 12:01:42.733349 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.749664 kubelet[2784]: E0119 12:01:42.733410 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:42.749664 kubelet[2784]: E0119 12:01:42.733439 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:42.749873 kubelet[2784]: E0119 12:01:42.733492 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16c461c576846a0b7678e531e344219c9530bae3dd5170f360321a4d9109a168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:01:42.824772 containerd[1598]: time="2026-01-19T12:01:42.823451260Z" level=error msg="Failed to destroy network for sandbox \"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.826770 containerd[1598]: time="2026-01-19T12:01:42.826650094Z" level=error msg="Failed to destroy network for sandbox \"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.841731 containerd[1598]: time="2026-01-19T12:01:42.840838065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.846360 kubelet[2784]: E0119 12:01:42.844382 2784 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.846360 kubelet[2784]: E0119 12:01:42.844438 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:42.846360 kubelet[2784]: E0119 12:01:42.844458 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:42.846501 containerd[1598]: time="2026-01-19T12:01:42.844548701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.846755 kubelet[2784]: E0119 12:01:42.844866 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1786b9b1d414d2e07a252ab330c3d189c232f27071b5673428614918ef9bd63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:01:42.846755 kubelet[2784]: E0119 12:01:42.845851 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:42.846755 kubelet[2784]: E0119 12:01:42.846269 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:42.847426 kubelet[2784]: E0119 12:01:42.846300 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:42.847426 kubelet[2784]: E0119 12:01:42.846476 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pds2j_kube-system(668ec2ef-a9db-464e-8f8c-faf18a92d85c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pds2j_kube-system(668ec2ef-a9db-464e-8f8c-faf18a92d85c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb460040b4b2727c62b3d16009de5547ad439404aa6e04c791b48c4049018f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pds2j" podUID="668ec2ef-a9db-464e-8f8c-faf18a92d85c" Jan 19 12:01:43.209868 systemd[1]: run-netns-cni\x2d2bf71035\x2d9a3e\x2df4ed\x2d3bcc\x2d2adeaba878c7.mount: Deactivated successfully. Jan 19 12:01:43.210433 systemd[1]: run-netns-cni\x2d2f747758\x2dcd80\x2db312\x2d17f0\x2d2eda75c60e12.mount: Deactivated successfully. Jan 19 12:01:43.210531 systemd[1]: run-netns-cni\x2d816a6d49\x2d1f9e\x2d92ee\x2dc481\x2dbdf32aaaeef0.mount: Deactivated successfully. 
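
The sandbox failures above all report the same root condition: each CNI add/delete stats /var/lib/calico/nodename and fails because the file does not exist until the calico/node container is running and has mounted /var/lib/calico/. A minimal Go sketch of that readiness check, with the path and error wording taken only from the log text above (not from Calico's actual source):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // nodenameFile is the path the failing CNI calls are stat-ing; calico/node
    // writes it once it is up and has /var/lib/calico/ mounted.
    const nodenameFile = "/var/lib/calico/nodename"

    // checkCalicoNodeReady reproduces the failure mode seen in the log: a
    // missing nodename file means pod sandbox setup cannot proceed yet.
    func checkCalicoNodeReady() error {
        if _, err := os.Stat(nodenameFile); err != nil {
            if errors.Is(err, os.ErrNotExist) {
                return fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
            }
            return err
        }
        return nil
    }

    func main() {
        if err := checkCalicoNodeReady(); err != nil {
            fmt.Println("sandbox setup would fail:", err)
            return
        }
        fmt.Println("calico/node appears ready; CNI add can proceed")
    }

The retried RunPodSandbox calls later in the log keep hitting this same check until the file exists.
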
Jan 19 12:01:52.196444 kubelet[2784]: E0119 12:01:52.196286 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:52.533514 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 19 12:01:52.535930 kernel: audit: type=1325 audit(1768824112.502:571): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:52.502000 audit[3836]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:52.545490 kernel: audit: type=1300 audit(1768824112.502:571): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffddf4bb4e0 a2=0 a3=7ffddf4bb4cc items=0 ppid=2962 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:52.502000 audit[3836]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffddf4bb4e0 a2=0 a3=7ffddf4bb4cc items=0 ppid=2962 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:52.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:52.626520 kernel: audit: type=1327 audit(1768824112.502:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:52.626634 kernel: audit: type=1325 audit(1768824112.600:572): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:52.600000 audit[3836]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:01:52.648945 kernel: audit: type=1300 audit(1768824112.600:572): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffddf4bb4e0 a2=0 a3=7ffddf4bb4cc items=0 ppid=2962 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:52.600000 audit[3836]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffddf4bb4e0 a2=0 a3=7ffddf4bb4cc items=0 ppid=2962 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:01:52.695444 kernel: audit: type=1327 audit(1768824112.600:572): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:52.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:01:52.926906 kubelet[2784]: E0119 12:01:52.918796 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:54.225865 kubelet[2784]: E0119 12:01:54.225377 2784 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:54.227863 containerd[1598]: time="2026-01-19T12:01:54.226674186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:54.227863 containerd[1598]: time="2026-01-19T12:01:54.227670025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,}" Jan 19 12:01:54.227863 containerd[1598]: time="2026-01-19T12:01:54.227688289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:01:54.227863 containerd[1598]: time="2026-01-19T12:01:54.227727271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:54.778663 containerd[1598]: time="2026-01-19T12:01:54.774841548Z" level=error msg="Failed to destroy network for sandbox \"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.781852 systemd[1]: run-netns-cni\x2de9d2159a\x2deb33\x2dd7e5\x2d3e3a\x2dc9a1d26b0e62.mount: Deactivated successfully. Jan 19 12:01:54.819454 containerd[1598]: time="2026-01-19T12:01:54.818666107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.824535 kubelet[2784]: E0119 12:01:54.823519 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.824535 kubelet[2784]: E0119 12:01:54.823897 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:54.824535 kubelet[2784]: E0119 12:01:54.823926 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:01:54.827272 kubelet[2784]: E0119 12:01:54.824007 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85113a9be18d17db632e4ed194266226f8fa557a30d135d199ce097dab09862c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f685f8cc-t6mtc" podUID="cccc157f-015a-4ddd-9ea1-4caf6cdd3948" Jan 19 12:01:54.838341 containerd[1598]: time="2026-01-19T12:01:54.838009667Z" level=error msg="Failed to destroy network for sandbox \"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.841973 systemd[1]: run-netns-cni\x2d39a5f77f\x2de0a1\x2dc4a6\x2d8be9\x2dcb8ee145aede.mount: Deactivated successfully. Jan 19 12:01:54.864566 containerd[1598]: time="2026-01-19T12:01:54.862965171Z" level=error msg="Failed to destroy network for sandbox \"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.872475 containerd[1598]: time="2026-01-19T12:01:54.872418354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.874823 kubelet[2784]: E0119 12:01:54.873882 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.874823 kubelet[2784]: E0119 12:01:54.873963 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:54.874823 kubelet[2784]: E0119 12:01:54.873998 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:01:54.874986 kubelet[2784]: E0119 12:01:54.874408 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf2693845f1ec857bdcdfd9c7311f6a79a25e4b4b4a032ab4cea1be33e0c19e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:01:54.893372 containerd[1598]: time="2026-01-19T12:01:54.892991203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.900815 kubelet[2784]: E0119 12:01:54.900768 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.900988 kubelet[2784]: E0119 12:01:54.900967 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:54.901427 kubelet[2784]: E0119 12:01:54.901405 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xg6zc" Jan 19 12:01:54.901532 kubelet[2784]: E0119 12:01:54.901508 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17fe22e302fecca2928caeec889fb53b524da2a68d5c7d83c05e7570db7d5cd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:01:54.934540 containerd[1598]: time="2026-01-19T12:01:54.931790949Z" level=error msg="Failed to destroy network for sandbox \"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.983961 containerd[1598]: time="2026-01-19T12:01:54.983902087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.985527 kubelet[2784]: E0119 12:01:54.985387 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:54.985527 kubelet[2784]: E0119 12:01:54.985457 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:54.985527 kubelet[2784]: E0119 12:01:54.985488 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pds2j" Jan 19 12:01:54.990008 kubelet[2784]: E0119 12:01:54.987594 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pds2j_kube-system(668ec2ef-a9db-464e-8f8c-faf18a92d85c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pds2j_kube-system(668ec2ef-a9db-464e-8f8c-faf18a92d85c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9169c7e3bf1db59a91cfa6f7269e72a5cda73e39717dfaa69487561dfa66f43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pds2j" podUID="668ec2ef-a9db-464e-8f8c-faf18a92d85c" Jan 19 12:01:55.341667 systemd[1]: run-netns-cni\x2d4fe54e61\x2d47fa\x2dc018\x2db3c0\x2d65cfbca55aab.mount: Deactivated successfully. Jan 19 12:01:55.341927 systemd[1]: run-netns-cni\x2dffa6df47\x2dafdc\x2dd7e2\x2da230\x2d39d87100cdf8.mount: Deactivated successfully. Jan 19 12:01:56.226410 containerd[1598]: time="2026-01-19T12:01:56.225934590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:56.567810 containerd[1598]: time="2026-01-19T12:01:56.562870652Z" level=error msg="Failed to destroy network for sandbox \"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:56.567917 systemd[1]: run-netns-cni\x2d43052b7d\x2daca0\x2db3ed\x2d87ff\x2d241ac22cecc5.mount: Deactivated successfully. Jan 19 12:01:56.607565 containerd[1598]: time="2026-01-19T12:01:56.606943414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:56.607959 kubelet[2784]: E0119 12:01:56.607757 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:56.607959 kubelet[2784]: E0119 12:01:56.607809 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:56.607959 kubelet[2784]: E0119 12:01:56.607830 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9qj" Jan 19 12:01:56.608729 kubelet[2784]: E0119 12:01:56.607871 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30c2efd50055a28bf526bead818db7b5cc6369d82c373ef68cd99c46e112142f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:01:57.234355 kubelet[2784]: E0119 12:01:57.233691 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:01:57.234901 containerd[1598]: time="2026-01-19T12:01:57.234863737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,}" Jan 19 12:01:57.238779 containerd[1598]: time="2026-01-19T12:01:57.235689964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,}" Jan 19 12:01:57.578337 containerd[1598]: time="2026-01-19T12:01:57.574721048Z" level=error msg="Failed to destroy network for sandbox \"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.580647 systemd[1]: run-netns-cni\x2d77717d11\x2db72c\x2dc373\x2d8a4d\x2dea5f1ee40608.mount: Deactivated successfully. Jan 19 12:01:57.609779 containerd[1598]: time="2026-01-19T12:01:57.609493230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.611549 kubelet[2784]: E0119 12:01:57.610877 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.611549 kubelet[2784]: E0119 12:01:57.611482 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:57.611549 kubelet[2784]: E0119 12:01:57.611517 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" Jan 19 12:01:57.613752 kubelet[2784]: E0119 12:01:57.611579 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b67f7ade2ea334484eddb63b586b9654e9bd47fcdde9554ee2dc3c9e539cc84c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:01:57.662389 containerd[1598]: time="2026-01-19T12:01:57.658434377Z" level=error msg="Failed to destroy network for sandbox \"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.664427 systemd[1]: run-netns-cni\x2d37d67878\x2d6fe9\x2dbb5b\x2d731c\x2d4a4eb5f9c89c.mount: Deactivated successfully. Jan 19 12:01:57.668766 containerd[1598]: time="2026-01-19T12:01:57.668726562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.671971 kubelet[2784]: E0119 12:01:57.671864 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:57.671971 kubelet[2784]: E0119 12:01:57.671933 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:57.671971 kubelet[2784]: E0119 12:01:57.671962 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-hpwks" Jan 19 12:01:57.676457 kubelet[2784]: E0119 12:01:57.672363 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hpwks_kube-system(afffad65-2533-4140-be9c-666164ec7581)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hpwks_kube-system(afffad65-2533-4140-be9c-666164ec7581)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f68d27d34d7df79bc8450742db8a9d2d6ba1f91678f5a91065dfa883a10dbc59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hpwks" podUID="afffad65-2533-4140-be9c-666164ec7581" Jan 19 12:01:58.244776 containerd[1598]: time="2026-01-19T12:01:58.243990835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:01:58.569847 containerd[1598]: time="2026-01-19T12:01:58.569399336Z" level=error msg="Failed to destroy network for sandbox \"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:58.575400 systemd[1]: run-netns-cni\x2d17cd9c5e\x2d3a46\x2d0a23\x2d6afa\x2dcab009947808.mount: Deactivated successfully. Jan 19 12:01:58.582405 containerd[1598]: time="2026-01-19T12:01:58.581948601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:58.583234 kubelet[2784]: E0119 12:01:58.582870 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:01:58.583234 kubelet[2784]: E0119 12:01:58.582941 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:58.583234 kubelet[2784]: E0119 12:01:58.582974 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" Jan 19 12:01:58.585786 kubelet[2784]: E0119 12:01:58.585734 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6168e9dc4c0d81608ca8de7d59d256fc64aba158ff6c73ef4a6130abf247b73d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:00.870410 kernel: hrtimer: interrupt took 1174588 ns Jan 19 12:02:06.237942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287613667.mount: Deactivated successfully. Jan 19 12:02:06.394641 containerd[1598]: time="2026-01-19T12:02:06.394010093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 19 12:02:06.410895 containerd[1598]: time="2026-01-19T12:02:06.410673506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 24.64698798s" Jan 19 12:02:06.410895 containerd[1598]: time="2026-01-19T12:02:06.410719452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 19 12:02:06.421624 containerd[1598]: time="2026-01-19T12:02:06.421370918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:02:06.424659 containerd[1598]: time="2026-01-19T12:02:06.424625301Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:02:06.433338 containerd[1598]: time="2026-01-19T12:02:06.433309381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 12:02:06.495336 containerd[1598]: time="2026-01-19T12:02:06.494962028Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 19 12:02:06.574730 containerd[1598]: time="2026-01-19T12:02:06.571928593Z" level=info msg="Container 24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:02:06.574971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236038973.mount: Deactivated successfully. 
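
The containerd entries just above record the calico/node image pull completing: 156880025 bytes read over a reported 24.64698798s for ghcr.io/flatcar/calico/node:v3.30.4. A short Go sketch using only those logged figures to put a rough throughput number on the pull (the duration covers the whole PullImage call, so this is only an approximation):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures copied from the "stop pulling image" and "Pulled image"
        // entries above for ghcr.io/flatcar/calico/node:v3.30.4.
        const bytesRead = 156880025
        pullTime, err := time.ParseDuration("24.64698798s")
        if err != nil {
            panic(err)
        }

        mib := float64(bytesRead) / (1 << 20)
        fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n",
            mib, pullTime, mib/pullTime.Seconds())
    }

That works out to roughly 150 MiB at about 6 MiB/s.
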
Jan 19 12:02:06.606645 containerd[1598]: time="2026-01-19T12:02:06.606004323Z" level=info msg="CreateContainer within sandbox \"9b7d46c0eb6548f25e58b3e2849ae3f6e3b70ff26349842b144ae0a99766e765\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190\"" Jan 19 12:02:06.613330 containerd[1598]: time="2026-01-19T12:02:06.612797933Z" level=info msg="StartContainer for \"24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190\"" Jan 19 12:02:06.616751 containerd[1598]: time="2026-01-19T12:02:06.615988597Z" level=info msg="connecting to shim 24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190" address="unix:///run/containerd/s/7b617b2e64714ad2b0571e9b271b9bf6daf90e7ce3b137ed3a790995e5dd29c5" protocol=ttrpc version=3 Jan 19 12:02:06.781884 systemd[1]: Started cri-containerd-24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190.scope - libcontainer container 24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190. Jan 19 12:02:06.927000 audit: BPF prog-id=172 op=LOAD Jan 19 12:02:06.927000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:06.992503 kernel: audit: type=1334 audit(1768824126.927:573): prog-id=172 op=LOAD Jan 19 12:02:06.992632 kernel: audit: type=1300 audit(1768824126.927:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:06.992862 kernel: audit: type=1327 audit(1768824126.927:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.928000 audit: BPF prog-id=173 op=LOAD Jan 19 12:02:06.928000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:07.104939 kernel: audit: type=1334 audit(1768824126.928:574): prog-id=173 op=LOAD Jan 19 12:02:07.105556 kernel: audit: type=1300 audit(1768824126.928:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:07.105621 kernel: audit: type=1327 audit(1768824126.928:574): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.928000 audit: BPF prog-id=173 op=UNLOAD Jan 19 12:02:07.169635 kernel: audit: type=1334 audit(1768824126.928:575): prog-id=173 op=UNLOAD Jan 19 12:02:06.928000 audit[4099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:07.213360 kernel: audit: type=1300 audit(1768824126.928:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:07.213624 kernel: audit: type=1327 audit(1768824126.928:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:07.241367 containerd[1598]: time="2026-01-19T12:02:07.239913980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,}" Jan 19 12:02:07.241999 containerd[1598]: time="2026-01-19T12:02:07.241971655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:02:07.265534 containerd[1598]: time="2026-01-19T12:02:07.265506005Z" level=info msg="StartContainer for \"24e72ded64ed563270e2a568487624c96d832f4debdd4038fbf54bd6c871e190\" returns successfully" Jan 19 12:02:06.928000 audit: BPF prog-id=172 op=UNLOAD Jan 19 12:02:07.280155 kernel: audit: type=1334 audit(1768824126.928:576): prog-id=172 op=UNLOAD Jan 19 12:02:06.928000 audit[4099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:06.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:06.928000 audit: BPF prog-id=174 op=LOAD Jan 
19 12:02:06.928000 audit[4099]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3311 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:06.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234653732646564363465643536333237306532613536383438373632 Jan 19 12:02:07.662782 containerd[1598]: time="2026-01-19T12:02:07.657955808Z" level=error msg="Failed to destroy network for sandbox \"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.663399 systemd[1]: run-netns-cni\x2dcc635779\x2d4063\x2deefc\x2da7e9\x2d8db57a726d5e.mount: Deactivated successfully. Jan 19 12:02:07.683862 containerd[1598]: time="2026-01-19T12:02:07.683829104Z" level=error msg="Failed to destroy network for sandbox \"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.692802 systemd[1]: run-netns-cni\x2d987690af\x2dd8ae\x2d00cd\x2d52d8\x2d8a6cf3baaa4c.mount: Deactivated successfully. Jan 19 12:02:07.694659 containerd[1598]: time="2026-01-19T12:02:07.694613092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.702765 kubelet[2784]: E0119 12:02:07.702190 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.702765 kubelet[2784]: E0119 12:02:07.702270 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:02:07.702765 kubelet[2784]: E0119 12:02:07.702298 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" Jan 19 12:02:07.705918 kubelet[2784]: E0119 12:02:07.702357 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"124fb28008199214cb9930c9490f77662f6e47c1a3884edd386418a3947b6392\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:07.707389 kubelet[2784]: E0119 12:02:07.706894 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.707389 kubelet[2784]: E0119 12:02:07.706933 2784 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:02:07.707389 kubelet[2784]: E0119 12:02:07.706960 2784 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f685f8cc-t6mtc" Jan 19 12:02:07.707622 containerd[1598]: time="2026-01-19T12:02:07.706601464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f685f8cc-t6mtc,Uid:cccc157f-015a-4ddd-9ea1-4caf6cdd3948,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 12:02:07.708751 kubelet[2784]: E0119 12:02:07.707004 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f685f8cc-t6mtc_calico-system(cccc157f-015a-4ddd-9ea1-4caf6cdd3948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"187ac8fd0f6f3d934f28dfd0ec1ce6dcfacb18a2a66a062627618edc3bc0e7a8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f685f8cc-t6mtc" podUID="cccc157f-015a-4ddd-9ea1-4caf6cdd3948" Jan 19 12:02:07.725809 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 19 12:02:07.725951 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 19 12:02:08.143552 kubelet[2784]: E0119 12:02:08.141695 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:08.228963 kubelet[2784]: E0119 12:02:08.228924 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:08.238717 containerd[1598]: time="2026-01-19T12:02:08.238345300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,}" Jan 19 12:02:08.242229 containerd[1598]: time="2026-01-19T12:02:08.239349373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,}" Jan 19 12:02:08.296250 kubelet[2784]: I0119 12:02:08.290798 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z7s9l" podStartSLOduration=2.864107998 podStartE2EDuration="45.290777706s" podCreationTimestamp="2026-01-19 12:01:23 +0000 UTC" firstStartedPulling="2026-01-19 12:01:23.988856924 +0000 UTC m=+29.096267296" lastFinishedPulling="2026-01-19 12:02:06.415526631 +0000 UTC m=+71.522937004" observedRunningTime="2026-01-19 12:02:08.281351395 +0000 UTC m=+73.388761767" watchObservedRunningTime="2026-01-19 12:02:08.290777706 +0000 UTC m=+73.398188077" Jan 19 12:02:08.456944 kubelet[2784]: I0119 12:02:08.454351 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-backend-key-pair\") pod \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " Jan 19 12:02:08.456944 kubelet[2784]: I0119 12:02:08.454414 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-ca-bundle\") pod \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " Jan 19 12:02:08.456944 kubelet[2784]: I0119 12:02:08.454582 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d496d\" (UniqueName: \"kubernetes.io/projected/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-kube-api-access-d496d\") pod \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\" (UID: \"cccc157f-015a-4ddd-9ea1-4caf6cdd3948\") " Jan 19 12:02:08.456944 kubelet[2784]: I0119 12:02:08.456943 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cccc157f-015a-4ddd-9ea1-4caf6cdd3948" (UID: "cccc157f-015a-4ddd-9ea1-4caf6cdd3948"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 19 12:02:08.498234 systemd[1]: var-lib-kubelet-pods-cccc157f\x2d015a\x2d4ddd\x2d9ea1\x2d4caf6cdd3948-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd496d.mount: Deactivated successfully. Jan 19 12:02:08.498698 systemd[1]: var-lib-kubelet-pods-cccc157f\x2d015a\x2d4ddd\x2d9ea1\x2d4caf6cdd3948-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 19 12:02:08.505877 kubelet[2784]: I0119 12:02:08.505684 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-kube-api-access-d496d" (OuterVolumeSpecName: "kube-api-access-d496d") pod "cccc157f-015a-4ddd-9ea1-4caf6cdd3948" (UID: "cccc157f-015a-4ddd-9ea1-4caf6cdd3948"). InnerVolumeSpecName "kube-api-access-d496d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 19 12:02:08.508771 kubelet[2784]: I0119 12:02:08.508727 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cccc157f-015a-4ddd-9ea1-4caf6cdd3948" (UID: "cccc157f-015a-4ddd-9ea1-4caf6cdd3948"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 19 12:02:08.556545 kubelet[2784]: I0119 12:02:08.556324 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 19 12:02:08.556545 kubelet[2784]: I0119 12:02:08.556360 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 19 12:02:08.556545 kubelet[2784]: I0119 12:02:08.556371 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d496d\" (UniqueName: \"kubernetes.io/projected/cccc157f-015a-4ddd-9ea1-4caf6cdd3948-kube-api-access-d496d\") on node \"localhost\" DevicePath \"\"" Jan 19 12:02:09.141527 kubelet[2784]: E0119 12:02:09.140772 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:09.153846 systemd[1]: Removed slice kubepods-besteffort-podcccc157f_015a_4ddd_9ea1_4caf6cdd3948.slice - libcontainer container kubepods-besteffort-podcccc157f_015a_4ddd_9ea1_4caf6cdd3948.slice. 
Jan 19 12:02:09.225327 kubelet[2784]: E0119 12:02:09.224361 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:09.225858 kubelet[2784]: E0119 12:02:09.225840 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:09.226674 kubelet[2784]: E0119 12:02:09.226651 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:09.227701 containerd[1598]: time="2026-01-19T12:02:09.226942237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,}" Jan 19 12:02:09.230149 containerd[1598]: time="2026-01-19T12:02:09.229775627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,}" Jan 19 12:02:09.391121 systemd-networkd[1513]: cali479a902d8a5: Link UP Jan 19 12:02:09.395924 systemd-networkd[1513]: cali479a902d8a5: Gained carrier Jan 19 12:02:09.412700 systemd[1]: Created slice kubepods-besteffort-podcfea8a68_b054_48f1_88a4_ee8fd7bce007.slice - libcontainer container kubepods-besteffort-podcfea8a68_b054_48f1_88a4_ee8fd7bce007.slice. Jan 19 12:02:09.481848 kubelet[2784]: I0119 12:02:09.481649 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfea8a68-b054-48f1-88a4-ee8fd7bce007-whisker-ca-bundle\") pod \"whisker-757f87bb85-qrkvc\" (UID: \"cfea8a68-b054-48f1-88a4-ee8fd7bce007\") " pod="calico-system/whisker-757f87bb85-qrkvc" Jan 19 12:02:09.481848 kubelet[2784]: I0119 12:02:09.481715 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h446n\" (UniqueName: \"kubernetes.io/projected/cfea8a68-b054-48f1-88a4-ee8fd7bce007-kube-api-access-h446n\") pod \"whisker-757f87bb85-qrkvc\" (UID: \"cfea8a68-b054-48f1-88a4-ee8fd7bce007\") " pod="calico-system/whisker-757f87bb85-qrkvc" Jan 19 12:02:09.481848 kubelet[2784]: I0119 12:02:09.481765 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cfea8a68-b054-48f1-88a4-ee8fd7bce007-whisker-backend-key-pair\") pod \"whisker-757f87bb85-qrkvc\" (UID: \"cfea8a68-b054-48f1-88a4-ee8fd7bce007\") " pod="calico-system/whisker-757f87bb85-qrkvc" Jan 19 12:02:09.506707 containerd[1598]: 2026-01-19 12:02:08.654 [INFO][4217] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:09.506707 containerd[1598]: 2026-01-19 12:02:08.766 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xx9qj-eth0 csi-node-driver- calico-system 7402f958-3527-492d-aaa2-32f171fd00ee 757 0 2026-01-19 12:01:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} 
{k8s localhost csi-node-driver-xx9qj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali479a902d8a5 [] [] }} ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-" Jan 19 12:02:09.506707 containerd[1598]: 2026-01-19 12:02:08.766 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.506707 containerd[1598]: 2026-01-19 12:02:09.071 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" HandleID="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Workload="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.075 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" HandleID="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Workload="localhost-k8s-csi--node--driver--xx9qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018eba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xx9qj", "timestamp":"2026-01-19 12:02:09.07128042 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.075 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.076 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.078 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.113 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" host="localhost" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.149 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.172 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.179 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.196 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:09.507372 containerd[1598]: 2026-01-19 12:02:09.198 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" host="localhost" Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.207 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.223 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" host="localhost" Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.260 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" host="localhost" Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.262 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" host="localhost" Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.262 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:02:09.507905 containerd[1598]: 2026-01-19 12:02:09.262 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" HandleID="k8s-pod-network.528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Workload="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.510981 containerd[1598]: 2026-01-19 12:02:09.347 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xx9qj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7402f958-3527-492d-aaa2-32f171fd00ee", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xx9qj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali479a902d8a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:09.511258 containerd[1598]: 2026-01-19 12:02:09.348 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.511258 containerd[1598]: 2026-01-19 12:02:09.348 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali479a902d8a5 ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.511258 containerd[1598]: 2026-01-19 12:02:09.405 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.511362 containerd[1598]: 2026-01-19 12:02:09.416 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xx9qj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7402f958-3527-492d-aaa2-32f171fd00ee", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b", Pod:"csi-node-driver-xx9qj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali479a902d8a5", MAC:"ce:94:53:f5:60:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:09.511613 containerd[1598]: 2026-01-19 12:02:09.473 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" Namespace="calico-system" Pod="csi-node-driver-xx9qj" WorkloadEndpoint="localhost-k8s-csi--node--driver--xx9qj-eth0" Jan 19 12:02:09.681579 systemd-networkd[1513]: cali0a032a99252: Link UP Jan 19 12:02:09.684529 systemd-networkd[1513]: cali0a032a99252: Gained carrier Jan 19 12:02:09.734388 containerd[1598]: time="2026-01-19T12:02:09.734207860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f87bb85-qrkvc,Uid:cfea8a68-b054-48f1-88a4-ee8fd7bce007,Namespace:calico-system,Attempt:0,}" Jan 19 12:02:09.758790 containerd[1598]: 2026-01-19 12:02:08.611 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:09.758790 containerd[1598]: 2026-01-19 12:02:08.790 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0 calico-kube-controllers-74c9b656d5- calico-system b7bafa95-2a0c-41ee-a149-7208583b6960 869 0 2026-01-19 12:01:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c9b656d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74c9b656d5-9mc5x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0a032a99252 [] [] }} ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-" Jan 19 
12:02:09.758790 containerd[1598]: 2026-01-19 12:02:08.790 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.758790 containerd[1598]: 2026-01-19 12:02:09.072 [INFO][4281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" HandleID="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Workload="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.073 [INFO][4281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" HandleID="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Workload="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b4e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74c9b656d5-9mc5x", "timestamp":"2026-01-19 12:02:09.072170931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.073 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.262 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.262 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.433 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" host="localhost" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.473 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.505 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.527 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.534 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:09.759277 containerd[1598]: 2026-01-19 12:02:09.535 [INFO][4281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" host="localhost" Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.546 [INFO][4281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8 Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.576 [INFO][4281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" host="localhost" Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.628 [INFO][4281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" host="localhost" Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.628 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" host="localhost" Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.628 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:02:09.759779 containerd[1598]: 2026-01-19 12:02:09.628 [INFO][4281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" HandleID="k8s-pod-network.702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Workload="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.760328 containerd[1598]: 2026-01-19 12:02:09.671 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0", GenerateName:"calico-kube-controllers-74c9b656d5-", Namespace:"calico-system", SelfLink:"", UID:"b7bafa95-2a0c-41ee-a149-7208583b6960", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c9b656d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74c9b656d5-9mc5x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0a032a99252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:09.760580 containerd[1598]: 2026-01-19 12:02:09.671 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.760580 containerd[1598]: 2026-01-19 12:02:09.673 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a032a99252 ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.760580 containerd[1598]: 2026-01-19 12:02:09.683 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.760673 containerd[1598]: 2026-01-19 12:02:09.693 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0", GenerateName:"calico-kube-controllers-74c9b656d5-", Namespace:"calico-system", SelfLink:"", UID:"b7bafa95-2a0c-41ee-a149-7208583b6960", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c9b656d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8", Pod:"calico-kube-controllers-74c9b656d5-9mc5x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0a032a99252", MAC:"76:ef:8b:2e:f4:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:09.760853 containerd[1598]: 2026-01-19 12:02:09.745 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" Namespace="calico-system" Pod="calico-kube-controllers-74c9b656d5-9mc5x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c9b656d5--9mc5x-eth0" Jan 19 12:02:09.835589 containerd[1598]: time="2026-01-19T12:02:09.834989347Z" level=info msg="connecting to shim 528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b" address="unix:///run/containerd/s/52cb019e9df238541ea8ee5ba0be59d04c551cb64e26265eb432470e0ffd7c19" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:09.893436 containerd[1598]: time="2026-01-19T12:02:09.893325254Z" level=info msg="connecting to shim 702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8" address="unix:///run/containerd/s/0fe26f2f45c47fdbbc59306179bcf85b93d7419a16956def08892f12658f10f4" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:09.967932 systemd-networkd[1513]: cali0f1f9ecb8e7: Link UP Jan 19 12:02:09.968994 systemd-networkd[1513]: cali0f1f9ecb8e7: Gained carrier Jan 19 12:02:10.030801 systemd[1]: Started cri-containerd-528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b.scope - libcontainer container 528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b. Jan 19 12:02:10.051413 systemd[1]: Started cri-containerd-702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8.scope - libcontainer container 702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8. 
Jan 19 12:02:10.066994 containerd[1598]: 2026-01-19 12:02:09.423 [INFO][4317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:10.066994 containerd[1598]: 2026-01-19 12:02:09.479 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pds2j-eth0 coredns-674b8bbfcf- kube-system 668ec2ef-a9db-464e-8f8c-faf18a92d85c 880 0 2026-01-19 12:01:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pds2j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0f1f9ecb8e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-" Jan 19 12:02:10.066994 containerd[1598]: 2026-01-19 12:02:09.479 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.066994 containerd[1598]: 2026-01-19 12:02:09.733 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" HandleID="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Workload="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.737 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" HandleID="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Workload="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004318e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pds2j", "timestamp":"2026-01-19 12:02:09.733854976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.738 [INFO][4369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.738 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.738 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.783 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" host="localhost" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.816 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.839 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.848 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.854 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.067654 containerd[1598]: 2026-01-19 12:02:09.854 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" host="localhost" Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.860 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1 Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.874 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" host="localhost" Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.902 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" host="localhost" Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.902 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" host="localhost" Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.902 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:02:10.067952 containerd[1598]: 2026-01-19 12:02:09.902 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" HandleID="k8s-pod-network.3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Workload="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.068574 containerd[1598]: 2026-01-19 12:02:09.943 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pds2j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"668ec2ef-a9db-464e-8f8c-faf18a92d85c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pds2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f1f9ecb8e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.068741 containerd[1598]: 2026-01-19 12:02:09.944 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.068741 containerd[1598]: 2026-01-19 12:02:09.944 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f1f9ecb8e7 ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.068741 containerd[1598]: 2026-01-19 12:02:09.991 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.068817 
containerd[1598]: 2026-01-19 12:02:09.998 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pds2j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"668ec2ef-a9db-464e-8f8c-faf18a92d85c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1", Pod:"coredns-674b8bbfcf-pds2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f1f9ecb8e7", MAC:"ca:20:45:91:93:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.068817 containerd[1598]: 2026-01-19 12:02:10.044 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pds2j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pds2j-eth0" Jan 19 12:02:10.091914 systemd-networkd[1513]: cali5025fb616a7: Link UP Jan 19 12:02:10.095418 systemd-networkd[1513]: cali5025fb616a7: Gained carrier Jan 19 12:02:10.113000 audit: BPF prog-id=175 op=LOAD Jan 19 12:02:10.114000 audit: BPF prog-id=176 op=LOAD Jan 19 12:02:10.114000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=176 op=UNLOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=14 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=177 op=LOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=178 op=LOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=178 op=UNLOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=177 op=UNLOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.115000 audit: BPF prog-id=179 op=LOAD Jan 19 12:02:10.115000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4414 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532386430656335313564356166636465636235653865336264373333 Jan 19 12:02:10.145683 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.452 [INFO][4314] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.480 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hpwks-eth0 coredns-674b8bbfcf- kube-system afffad65-2533-4140-be9c-666164ec7581 873 0 2026-01-19 12:01:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hpwks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5025fb616a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.481 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.745 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" HandleID="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Workload="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.746 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" HandleID="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Workload="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139660), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hpwks", "timestamp":"2026-01-19 12:02:09.745898428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.746 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.905 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.905 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.941 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.955 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.981 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:09.992 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.000 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.000 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.013 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746 Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.047 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.063 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.063 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" host="localhost" Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.063 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:02:10.149676 containerd[1598]: 2026-01-19 12:02:10.063 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" HandleID="k8s-pod-network.11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Workload="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.150418 containerd[1598]: 2026-01-19 12:02:10.079 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hpwks-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"afffad65-2533-4140-be9c-666164ec7581", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hpwks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5025fb616a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.150418 containerd[1598]: 2026-01-19 12:02:10.080 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.150418 containerd[1598]: 2026-01-19 12:02:10.080 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5025fb616a7 ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.150418 containerd[1598]: 2026-01-19 12:02:10.096 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.150418 
containerd[1598]: 2026-01-19 12:02:10.098 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hpwks-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"afffad65-2533-4140-be9c-666164ec7581", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746", Pod:"coredns-674b8bbfcf-hpwks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5025fb616a7", MAC:"9e:9e:b8:e4:47:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.150418 containerd[1598]: 2026-01-19 12:02:10.139 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" Namespace="kube-system" Pod="coredns-674b8bbfcf-hpwks" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hpwks-eth0" Jan 19 12:02:10.165000 audit: BPF prog-id=180 op=LOAD Jan 19 12:02:10.166000 audit: BPF prog-id=181 op=LOAD Jan 19 12:02:10.166000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.166000 audit: BPF prog-id=181 op=UNLOAD Jan 19 12:02:10.166000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.166000 audit: BPF prog-id=182 op=LOAD Jan 19 12:02:10.166000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.166000 audit: BPF prog-id=183 op=LOAD Jan 19 12:02:10.166000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.166000 audit: BPF prog-id=183 op=UNLOAD Jan 19 12:02:10.166000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.167000 audit: BPF prog-id=182 op=UNLOAD Jan 19 12:02:10.167000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.167000 audit: BPF prog-id=184 op=LOAD Jan 19 12:02:10.167000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4441 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.167000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730326633656634643731636165366362613332656438333436333732 Jan 19 12:02:10.176512 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:10.180997 containerd[1598]: time="2026-01-19T12:02:10.180346568Z" level=info msg="connecting to shim 3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1" address="unix:///run/containerd/s/0f6f3eceb09613314aefd41ca4638b785aee214bb0b5661efbdf56960cbe13d9" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:10.227652 containerd[1598]: time="2026-01-19T12:02:10.226981924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,}" Jan 19 12:02:10.227652 containerd[1598]: time="2026-01-19T12:02:10.227308460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:02:10.268256 containerd[1598]: time="2026-01-19T12:02:10.267880658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9qj,Uid:7402f958-3527-492d-aaa2-32f171fd00ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"528d0ec515d5afcdecb5e8e3bd733c507289a7ab1313245e8c27e48538f4c10b\"" Jan 19 12:02:10.293150 containerd[1598]: time="2026-01-19T12:02:10.291647545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:02:10.346750 systemd[1]: Started cri-containerd-3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1.scope - libcontainer container 3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1. 
Jan 19 12:02:10.362217 containerd[1598]: time="2026-01-19T12:02:10.362003050Z" level=info msg="connecting to shim 11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746" address="unix:///run/containerd/s/088919139c7788fb7c8163e1efedb4eee4d345735879f7cd655e0fc135bac6e7" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:10.399667 containerd[1598]: time="2026-01-19T12:02:10.399634853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c9b656d5-9mc5x,Uid:b7bafa95-2a0c-41ee-a149-7208583b6960,Namespace:calico-system,Attempt:0,} returns sandbox id \"702f3ef4d71cae6cba32ed83463723ac0d6632c1a2c8c3a594d81e4e14233ed8\"" Jan 19 12:02:10.437000 audit: BPF prog-id=185 op=LOAD Jan 19 12:02:10.440770 containerd[1598]: time="2026-01-19T12:02:10.439676184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:10.439000 audit: BPF prog-id=186 op=LOAD Jan 19 12:02:10.439000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.439000 audit: BPF prog-id=186 op=UNLOAD Jan 19 12:02:10.439000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.442629 containerd[1598]: time="2026-01-19T12:02:10.442303727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:02:10.442629 containerd[1598]: time="2026-01-19T12:02:10.442392474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:10.442735 kubelet[2784]: E0119 12:02:10.442632 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:10.442735 kubelet[2784]: E0119 12:02:10.442692 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:10.441000 audit: BPF prog-id=187 op=LOAD Jan 19 12:02:10.441000 audit[4533]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.441000 audit: BPF prog-id=188 op=LOAD Jan 19 12:02:10.441000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.441000 audit: BPF prog-id=188 op=UNLOAD Jan 19 12:02:10.441000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.441000 audit: BPF prog-id=187 op=UNLOAD Jan 19 12:02:10.441000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.442000 audit: BPF prog-id=189 op=LOAD Jan 19 12:02:10.442000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333366363862663430663661623662333265646534343539653138 Jan 19 12:02:10.445312 containerd[1598]: time="2026-01-19T12:02:10.443711450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:02:10.447923 kubelet[2784]: E0119 12:02:10.444739 2784 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:10.449779 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:10.486215 systemd-networkd[1513]: calic272cdece13: Link UP Jan 19 12:02:10.492838 systemd-networkd[1513]: calic272cdece13: Gained carrier Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:09.954 [INFO][4407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:09.995 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--757f87bb85--qrkvc-eth0 whisker-757f87bb85- calico-system cfea8a68-b054-48f1-88a4-ee8fd7bce007 1012 0 2026-01-19 12:02:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:757f87bb85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-757f87bb85-qrkvc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic272cdece13 [] [] }} ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:09.999 [INFO][4407] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.146 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" HandleID="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Workload="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.147 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" HandleID="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Workload="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cb9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-757f87bb85-qrkvc", "timestamp":"2026-01-19 12:02:10.146919236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.147 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.147 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.147 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.183 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.208 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.249 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.259 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.270 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.283 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.301 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122 Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.364 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.395 
[INFO][4474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.400 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" host="localhost" Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.400 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:02:10.551960 containerd[1598]: 2026-01-19 12:02:10.400 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" HandleID="k8s-pod-network.14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Workload="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.439 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--757f87bb85--qrkvc-eth0", GenerateName:"whisker-757f87bb85-", Namespace:"calico-system", SelfLink:"", UID:"cfea8a68-b054-48f1-88a4-ee8fd7bce007", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 2, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f87bb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-757f87bb85-qrkvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic272cdece13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.442 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.443 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic272cdece13 ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.498 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.499 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--757f87bb85--qrkvc-eth0", GenerateName:"whisker-757f87bb85-", Namespace:"calico-system", SelfLink:"", UID:"cfea8a68-b054-48f1-88a4-ee8fd7bce007", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 2, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f87bb85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122", Pod:"whisker-757f87bb85-qrkvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic272cdece13", MAC:"26:cd:9e:c5:60:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:10.557342 containerd[1598]: 2026-01-19 12:02:10.537 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" Namespace="calico-system" Pod="whisker-757f87bb85-qrkvc" WorkloadEndpoint="localhost-k8s-whisker--757f87bb85--qrkvc-eth0" Jan 19 12:02:10.581688 systemd[1]: Started cri-containerd-11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746.scope - libcontainer container 11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746. 
Jan 19 12:02:10.592572 containerd[1598]: time="2026-01-19T12:02:10.592412311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:10.599907 containerd[1598]: time="2026-01-19T12:02:10.599874335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:02:10.603223 containerd[1598]: time="2026-01-19T12:02:10.601125546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:10.606996 kubelet[2784]: E0119 12:02:10.606153 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:10.609688 kubelet[2784]: E0119 12:02:10.607006 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:10.612892 kubelet[2784]: E0119 12:02:10.612605 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s6vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:10.613448 containerd[1598]: time="2026-01-19T12:02:10.613421829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:02:10.617811 kubelet[2784]: E0119 12:02:10.617430 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:10.618000 audit: BPF prog-id=190 op=LOAD Jan 19 12:02:10.631000 audit: BPF prog-id=191 op=LOAD Jan 19 12:02:10.631000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.631000 audit: BPF prog-id=191 op=UNLOAD Jan 19 12:02:10.631000 audit[4609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.635000 audit: BPF prog-id=192 op=LOAD Jan 19 12:02:10.635000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.636000 audit: BPF prog-id=193 op=LOAD Jan 19 12:02:10.636000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.637000 audit: BPF prog-id=193 op=UNLOAD Jan 19 12:02:10.637000 audit[4609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.639000 audit: BPF prog-id=192 op=UNLOAD Jan 19 12:02:10.639000 audit[4609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.639000 audit: BPF prog-id=194 op=LOAD Jan 19 12:02:10.639000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4574 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:10.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636262313536366535386134393865303965663365343231323162 Jan 19 12:02:10.648649 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:10.706154 containerd[1598]: time="2026-01-19T12:02:10.704946245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:10.710330 containerd[1598]: time="2026-01-19T12:02:10.707977987Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-pds2j,Uid:668ec2ef-a9db-464e-8f8c-faf18a92d85c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1\"" Jan 19 12:02:10.715957 containerd[1598]: time="2026-01-19T12:02:10.715909928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:02:10.720701 containerd[1598]: time="2026-01-19T12:02:10.716215758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:10.730599 kubelet[2784]: E0119 12:02:10.730398 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:10.730691 kubelet[2784]: E0119 12:02:10.730610 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:10.731003 kubelet[2784]: E0119 12:02:10.730763 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:10.739736 kubelet[2784]: E0119 12:02:10.739008 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:10.753159 kubelet[2784]: E0119 12:02:10.752620 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:10.767220 containerd[1598]: time="2026-01-19T12:02:10.766782071Z" level=info msg="CreateContainer within sandbox \"3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 12:02:10.789001 containerd[1598]: time="2026-01-19T12:02:10.788832360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hpwks,Uid:afffad65-2533-4140-be9c-666164ec7581,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746\"" Jan 19 12:02:10.803298 kubelet[2784]: E0119 12:02:10.803266 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:10.813711 containerd[1598]: time="2026-01-19T12:02:10.813352143Z" level=info msg="connecting to shim 14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122" address="unix:///run/containerd/s/db43b14027dca2d9ce6443d5c74f79fc029644b6e4e78ee548ced19174ca7247" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:10.841923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915472586.mount: Deactivated successfully. Jan 19 12:02:10.851976 containerd[1598]: time="2026-01-19T12:02:10.851389353Z" level=info msg="Container aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:02:10.856918 containerd[1598]: time="2026-01-19T12:02:10.856877285Z" level=info msg="CreateContainer within sandbox \"11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 12:02:10.942444 containerd[1598]: time="2026-01-19T12:02:10.941002955Z" level=info msg="CreateContainer within sandbox \"3933f68bf40f6ab6b32ede4459e18fa3b67f8e6fccc1a8f4c8357b0e2ea719a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80\"" Jan 19 12:02:10.950172 containerd[1598]: time="2026-01-19T12:02:10.948680197Z" level=info msg="StartContainer for \"aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80\"" Jan 19 12:02:10.978439 systemd[1]: Started cri-containerd-14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122.scope - libcontainer container 14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122. 
Jan 19 12:02:10.986732 containerd[1598]: time="2026-01-19T12:02:10.985699639Z" level=info msg="Container a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db: CDI devices from CRI Config.CDIDevices: []" Jan 19 12:02:11.006355 containerd[1598]: time="2026-01-19T12:02:11.005853865Z" level=info msg="connecting to shim aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80" address="unix:///run/containerd/s/0f6f3eceb09613314aefd41ca4638b785aee214bb0b5661efbdf56960cbe13d9" protocol=ttrpc version=3 Jan 19 12:02:11.026252 containerd[1598]: time="2026-01-19T12:02:11.025911698Z" level=info msg="CreateContainer within sandbox \"11cbb1566e58a498e09ef3e42121b60a6563027ed3521ed0f0fda2fa325fa746\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db\"" Jan 19 12:02:11.032794 containerd[1598]: time="2026-01-19T12:02:11.032701635Z" level=info msg="StartContainer for \"a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db\"" Jan 19 12:02:11.034620 containerd[1598]: time="2026-01-19T12:02:11.034436622Z" level=info msg="connecting to shim a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db" address="unix:///run/containerd/s/088919139c7788fb7c8163e1efedb4eee4d345735879f7cd655e0fc135bac6e7" protocol=ttrpc version=3 Jan 19 12:02:11.092891 systemd-networkd[1513]: cali479a902d8a5: Gained IPv6LL Jan 19 12:02:11.104000 audit: BPF prog-id=195 op=LOAD Jan 19 12:02:11.106000 audit: BPF prog-id=196 op=LOAD Jan 19 12:02:11.106000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.108000 audit: BPF prog-id=196 op=UNLOAD Jan 19 12:02:11.108000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.110000 audit: BPF prog-id=197 op=LOAD Jan 19 12:02:11.110000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.111000 audit: BPF 
prog-id=198 op=LOAD Jan 19 12:02:11.111000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.112000 audit: BPF prog-id=198 op=UNLOAD Jan 19 12:02:11.112000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.112000 audit: BPF prog-id=197 op=UNLOAD Jan 19 12:02:11.112000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.113000 audit: BPF prog-id=199 op=LOAD Jan 19 12:02:11.113000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=4760 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134626461373034346531653337613430383632626637396634383338 Jan 19 12:02:11.150938 systemd[1]: Started cri-containerd-aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80.scope - libcontainer container aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80. 
Jan 19 12:02:11.163627 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:11.165545 systemd-networkd[1513]: cali0c54cce35b0: Link UP Jan 19 12:02:11.171720 systemd-networkd[1513]: cali0c54cce35b0: Gained carrier Jan 19 12:02:11.260325 kubelet[2784]: E0119 12:02:11.259605 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:11.269552 systemd[1]: Started cri-containerd-a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db.scope - libcontainer container a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db. Jan 19 12:02:11.283397 systemd-networkd[1513]: cali5025fb616a7: Gained IPv6LL Jan 19 12:02:11.284000 audit: BPF prog-id=200 op=LOAD Jan 19 12:02:11.288000 audit: BPF prog-id=201 op=LOAD Jan 19 12:02:11.288000 audit[4808]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.288000 audit: BPF prog-id=201 op=UNLOAD Jan 19 12:02:11.288000 audit[4808]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.301218 kubelet[2784]: I0119 12:02:11.300944 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccc157f-015a-4ddd-9ea1-4caf6cdd3948" path="/var/lib/kubelet/pods/cccc157f-015a-4ddd-9ea1-4caf6cdd3948/volumes" Jan 19 12:02:11.309000 audit: BPF prog-id=202 op=LOAD Jan 19 12:02:11.309000 audit[4808]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.309000 audit: BPF prog-id=203 op=LOAD Jan 19 12:02:11.309000 audit[4808]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.309000 audit: BPF prog-id=203 op=UNLOAD Jan 19 12:02:11.309000 audit[4808]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.309000 audit: BPF prog-id=202 op=UNLOAD Jan 19 12:02:11.309000 audit[4808]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.309000 audit: BPF prog-id=204 op=LOAD Jan 19 12:02:11.309000 audit[4808]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4520 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161313965633566373737323734323937616638353037643037356137 Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.433 [INFO][4548] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.481 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--xg6zc-eth0 goldmane-666569f655- calico-system 3f3cc3a4-a155-47d4-9e99-0f5c3bb53331 881 0 2026-01-19 12:01:20 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-xg6zc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0c54cce35b0 [] [] }} ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.481 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.816 [INFO][4649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" HandleID="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Workload="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.819 [INFO][4649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" HandleID="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Workload="localhost-k8s-goldmane--666569f655--xg6zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-xg6zc", "timestamp":"2026-01-19 12:02:10.816937753 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.819 [INFO][4649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.819 [INFO][4649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.819 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.858 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.885 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.912 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.922 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.948 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.948 [INFO][4649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:10.971 [INFO][4649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:11.056 [INFO][4649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:11.088 [INFO][4649] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:11.088 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" host="localhost" Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:11.093 [INFO][4649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
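The IPAM records above show addresses being drawn from the host's affine block 192.168.88.128/26; the claimed address 192.168.88.134 (and 192.168.88.135 for the next pod further below) falls inside that 64-address range. A small illustration with Python's standard ipaddress module:

    import ipaddress

    # The block Calico confirmed affinity for on this host.
    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                              # 64 (192.168.88.128 - 192.168.88.191)
    print(ipaddress.ip_address("192.168.88.134") in block)  # True: claimed for goldmane above
    print(ipaddress.ip_address("192.168.88.135") in block)  # True: claimed for calico-apiserver below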
Jan 19 12:02:11.344685 containerd[1598]: 2026-01-19 12:02:11.094 [INFO][4649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" HandleID="k8s-pod-network.25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Workload="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.111 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--xg6zc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-xg6zc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c54cce35b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.113 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.114 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c54cce35b0 ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.176 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.181 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--xg6zc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3f3cc3a4-a155-47d4-9e99-0f5c3bb53331", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae", Pod:"goldmane-666569f655-xg6zc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c54cce35b0", MAC:"aa:b5:fb:8b:17:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:11.348452 containerd[1598]: 2026-01-19 12:02:11.325 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" Namespace="calico-system" Pod="goldmane-666569f655-xg6zc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--xg6zc-eth0" Jan 19 12:02:11.368403 kubelet[2784]: E0119 12:02:11.367739 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:11.388000 audit: BPF prog-id=205 op=LOAD Jan 19 12:02:11.392653 containerd[1598]: time="2026-01-19T12:02:11.392424736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f87bb85-qrkvc,Uid:cfea8a68-b054-48f1-88a4-ee8fd7bce007,Namespace:calico-system,Attempt:0,} returns sandbox id \"14bda7044e1e37a40862bf79f48384855a96e9750a4fa6ba2a959aaa82ec7122\"" Jan 19 12:02:11.392000 audit: BPF prog-id=206 op=LOAD Jan 19 12:02:11.392000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.397000 audit: BPF prog-id=206 
op=UNLOAD Jan 19 12:02:11.397000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.400000 audit: BPF prog-id=207 op=LOAD Jan 19 12:02:11.400000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.400000 audit: BPF prog-id=208 op=LOAD Jan 19 12:02:11.400000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.400000 audit: BPF prog-id=208 op=UNLOAD Jan 19 12:02:11.400000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.400000 audit: BPF prog-id=207 op=UNLOAD Jan 19 12:02:11.400000 audit[4812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.400000 audit: BPF prog-id=209 op=LOAD Jan 19 12:02:11.400000 audit[4812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4574 pid=4812 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134366364643061306335646531663266303265373933653266643963 Jan 19 12:02:11.403755 containerd[1598]: time="2026-01-19T12:02:11.403418533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:02:11.413232 systemd-networkd[1513]: cali0f1f9ecb8e7: Gained IPv6LL Jan 19 12:02:11.552454 systemd-networkd[1513]: cali3901656d1f6: Link UP Jan 19 12:02:11.561412 containerd[1598]: time="2026-01-19T12:02:11.560604003Z" level=info msg="connecting to shim 25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae" address="unix:///run/containerd/s/b2f1ea74afdbcadbc570b6c4dd8868495d3feee461a9449024230fc9ac9791a7" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:11.562816 systemd-networkd[1513]: cali3901656d1f6: Gained carrier Jan 19 12:02:11.591216 containerd[1598]: time="2026-01-19T12:02:11.564955832Z" level=info msg="StartContainer for \"aa19ec5f777274297af8507d075a7d655a0246fbcb42622e56a1dda12eb71e80\" returns successfully" Jan 19 12:02:11.613815 containerd[1598]: time="2026-01-19T12:02:11.613691415Z" level=info msg="StartContainer for \"a46cdd0a0c5de1f2f02e793e2fd9c03f5ddfe6cb1d346725590e91429d0758db\" returns successfully" Jan 19 12:02:11.644206 containerd[1598]: time="2026-01-19T12:02:11.638380161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:11.665713 containerd[1598]: time="2026-01-19T12:02:11.665301575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:02:11.665951 containerd[1598]: time="2026-01-19T12:02:11.665927229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:11.666367 kubelet[2784]: E0119 12:02:11.666324 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:11.666367 kubelet[2784]: E0119 12:02:11.666370 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:11.668990 kubelet[2784]: E0119 12:02:11.666466 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6d633e8fb4c48d3a2956485955c24a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:11.668301 systemd-networkd[1513]: cali0a032a99252: Gained IPv6LL Jan 19 12:02:11.679350 containerd[1598]: time="2026-01-19T12:02:11.678410683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:10.614 [INFO][4552] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:10.720 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0 calico-apiserver-75b8686f69- calico-apiserver de448f12-2894-4550-a3a5-5ddf27420cbb 884 0 2026-01-19 12:01:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75b8686f69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75b8686f69-mdc5b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3901656d1f6 [] [] }} ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:10.720 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.688839 containerd[1598]: 
2026-01-19 12:02:10.923 [INFO][4758] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" HandleID="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Workload="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:10.927 [INFO][4758] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" HandleID="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Workload="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e2c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75b8686f69-mdc5b", "timestamp":"2026-01-19 12:02:10.923670106 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:10.927 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.090 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.090 [INFO][4758] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.147 [INFO][4758] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.197 [INFO][4758] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.251 [INFO][4758] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.291 [INFO][4758] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.332 [INFO][4758] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.334 [INFO][4758] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.341 [INFO][4758] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537 Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.365 [INFO][4758] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.414 [INFO][4758] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 
2026-01-19 12:02:11.416 [INFO][4758] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" host="localhost" Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.428 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 12:02:11.688839 containerd[1598]: 2026-01-19 12:02:11.428 [INFO][4758] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" HandleID="k8s-pod-network.c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Workload="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.519 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0", GenerateName:"calico-apiserver-75b8686f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"de448f12-2894-4550-a3a5-5ddf27420cbb", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8686f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75b8686f69-mdc5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3901656d1f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.519 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.519 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3901656d1f6 ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.573 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.594 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0", GenerateName:"calico-apiserver-75b8686f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"de448f12-2894-4550-a3a5-5ddf27420cbb", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8686f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537", Pod:"calico-apiserver-75b8686f69-mdc5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3901656d1f6", MAC:"56:a9:d9:c3:76:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:11.695764 containerd[1598]: 2026-01-19 12:02:11.664 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-mdc5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--mdc5b-eth0" Jan 19 12:02:11.709582 systemd[1]: Started cri-containerd-25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae.scope - libcontainer container 25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae. 
Jan 19 12:02:11.774657 containerd[1598]: time="2026-01-19T12:02:11.774608768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:11.781865 containerd[1598]: time="2026-01-19T12:02:11.781813112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:02:11.783615 containerd[1598]: time="2026-01-19T12:02:11.783322435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:11.785582 kubelet[2784]: E0119 12:02:11.785546 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:11.785854 kubelet[2784]: E0119 12:02:11.785709 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:11.785854 kubelet[2784]: E0119 12:02:11.785819 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:11.786948 kubelet[2784]: E0119 12:02:11.786906 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:11.800442 containerd[1598]: time="2026-01-19T12:02:11.800347024Z" level=info msg="connecting to shim c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537" address="unix:///run/containerd/s/c7eb67677f16bd6d1e1ae7b9ae49ec3145b6587b18bfe8d8b95dc8d2f9a72630" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:11.863000 audit: BPF prog-id=210 op=LOAD Jan 19 12:02:11.866000 audit: BPF prog-id=211 op=LOAD Jan 19 12:02:11.866000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.866000 audit: BPF prog-id=211 op=UNLOAD Jan 19 12:02:11.866000 audit[4912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.866000 audit: BPF prog-id=212 op=LOAD Jan 19 12:02:11.866000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.867000 audit: BPF prog-id=213 op=LOAD Jan 19 12:02:11.867000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4891 pid=4912 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.867000 audit: BPF prog-id=213 op=UNLOAD Jan 19 12:02:11.867000 audit[4912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.867000 audit: BPF prog-id=212 op=UNLOAD Jan 19 12:02:11.867000 audit[4912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.867000 audit: BPF prog-id=214 op=LOAD Jan 19 12:02:11.867000 audit[4912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4891 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.867000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613639343233356661383961616361623931363537646233383635 Jan 19 12:02:11.872895 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:11.913795 systemd[1]: Started cri-containerd-c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537.scope - libcontainer container c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537. 
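The repeated NotFound pull errors in this log point at the v3.30.4 tags simply not being published in the ghcr.io/flatcar/calico/* repositories, rather than at a network or credential problem. A hypothetical way to confirm that from another machine, assuming the repository is public and that ghcr.io follows the standard Docker Registry v2 anonymous token flow (both assumptions, not taken from the log):

    import json
    import urllib.request

    def list_tags(repo: str) -> list[str]:
        # List tags for a public ghcr.io repository such as 'flatcar/calico/whisker'.
        # Assumes the usual anonymous token flow; adjust if the repository needs authentication.
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        token = json.load(urllib.request.urlopen(token_url))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/tags/list",
            headers={"Authorization": f"Bearer {token}"},
        )
        return json.load(urllib.request.urlopen(req)).get("tags", [])

    # Check whether the tag kubelet keeps asking for exists at all.
    print("v3.30.4" in list_tags("flatcar/calico/whisker"))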
Jan 19 12:02:11.980000 audit: BPF prog-id=215 op=LOAD Jan 19 12:02:11.987725 kernel: kauditd_printk_skb: 181 callbacks suppressed Jan 19 12:02:11.987810 kernel: audit: type=1334 audit(1768824131.980:642): prog-id=215 op=LOAD Jan 19 12:02:12.003898 kernel: audit: type=1334 audit(1768824131.993:643): prog-id=216 op=LOAD Jan 19 12:02:11.993000 audit: BPF prog-id=216 op=LOAD Jan 19 12:02:11.993000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.040250 kernel: audit: type=1300 audit(1768824131.993:643): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:12.047766 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:12.072201 kernel: audit: type=1327 audit(1768824131.993:643): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.993000 audit: BPF prog-id=216 op=UNLOAD Jan 19 12:02:12.104203 kernel: audit: type=1334 audit(1768824131.993:644): prog-id=216 op=UNLOAD Jan 19 12:02:12.104319 kernel: audit: type=1300 audit(1768824131.993:644): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.993000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:12.130340 containerd[1598]: time="2026-01-19T12:02:12.127756539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xg6zc,Uid:3f3cc3a4-a155-47d4-9e99-0f5c3bb53331,Namespace:calico-system,Attempt:0,} returns sandbox id \"25a694235fa89aacab91657db3865c3644700ee32fa0636063c393649be819ae\"" Jan 19 12:02:12.138224 kernel: audit: type=1327 audit(1768824131.993:644): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:12.138294 kernel: audit: type=1334 audit(1768824131.996:645): prog-id=217 op=LOAD Jan 19 12:02:11.996000 audit: BPF prog-id=217 op=LOAD Jan 19 12:02:12.162278 kernel: audit: type=1300 audit(1768824131.996:645): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.996000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.162567 containerd[1598]: time="2026-01-19T12:02:12.143824099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:02:12.189849 kernel: audit: type=1327 audit(1768824131.996:645): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.997000 audit: BPF prog-id=218 op=LOAD Jan 19 12:02:11.997000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.997000 audit: BPF prog-id=218 op=UNLOAD Jan 19 12:02:11.997000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.997000 audit: BPF prog-id=217 op=UNLOAD Jan 19 12:02:11.997000 audit[4968]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:11.997000 audit: BPF prog-id=219 op=LOAD Jan 19 12:02:11.997000 audit[4968]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4951 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:11.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363033613562373737656532663431363966363530346132633938 Jan 19 12:02:12.198423 systemd-networkd[1513]: calic272cdece13: Gained IPv6LL Jan 19 12:02:12.199000 audit: BPF prog-id=220 op=LOAD Jan 19 12:02:12.199000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6d661ac0 a2=98 a3=1fffffffffffffff items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.199000 audit: BPF prog-id=220 op=UNLOAD Jan 19 12:02:12.199000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6d661a90 a3=0 items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.199000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.200000 audit: BPF prog-id=221 op=LOAD Jan 19 12:02:12.200000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6d6619a0 a2=94 a3=3 items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.201000 audit: BPF prog-id=221 op=UNLOAD Jan 19 12:02:12.201000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6d6619a0 a2=94 a3=3 items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.201000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.201000 audit: BPF prog-id=222 op=LOAD Jan 19 12:02:12.201000 audit[5019]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6d6619e0 a2=94 a3=7fff6d661bc0 items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.201000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.201000 audit: BPF prog-id=222 op=UNLOAD Jan 19 12:02:12.201000 audit[5019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6d6619e0 a2=94 a3=7fff6d661bc0 items=0 ppid=4640 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.201000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 12:02:12.207000 audit: BPF prog-id=223 op=LOAD Jan 19 12:02:12.207000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca13c27f0 a2=98 a3=3 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.207000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.208000 audit: BPF prog-id=223 op=UNLOAD Jan 19 12:02:12.208000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffca13c27c0 a3=0 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.208000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.209000 audit: BPF prog-id=224 op=LOAD Jan 19 12:02:12.209000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca13c25e0 a2=94 a3=54428f items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.209000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.209000 audit: BPF prog-id=224 op=UNLOAD Jan 19 12:02:12.209000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca13c25e0 a2=94 a3=54428f items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.209000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.210000 audit: BPF prog-id=225 op=LOAD Jan 19 12:02:12.210000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca13c2610 a2=94 a3=2 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.210000 audit: BPF prog-id=225 op=UNLOAD Jan 19 12:02:12.210000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca13c2610 a2=0 a3=2 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.210000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.280687 containerd[1598]: time="2026-01-19T12:02:12.280317338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:12.282320 containerd[1598]: time="2026-01-19T12:02:12.282284169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:02:12.282781 containerd[1598]: time="2026-01-19T12:02:12.282448799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:12.285883 kubelet[2784]: E0119 12:02:12.285770 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:12.285959 kubelet[2784]: E0119 12:02:12.285886 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:12.288847 kubelet[2784]: E0119 12:02:12.288000 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:12.290285 kubelet[2784]: E0119 12:02:12.290226 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:12.309378 containerd[1598]: time="2026-01-19T12:02:12.308932626Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-75b8686f69-mdc5b,Uid:de448f12-2894-4550-a3a5-5ddf27420cbb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c4603a5b777ee2f4169f6504a2c98e9f6d64e9e17ed5c0e3253b65cbf3aee537\"" Jan 19 12:02:12.312717 containerd[1598]: time="2026-01-19T12:02:12.312686256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:02:12.361735 kubelet[2784]: E0119 12:02:12.361621 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:12.376150 kubelet[2784]: E0119 12:02:12.375734 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:12.387258 kubelet[2784]: E0119 12:02:12.386870 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:12.395120 containerd[1598]: time="2026-01-19T12:02:12.394921216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:12.405455 containerd[1598]: time="2026-01-19T12:02:12.405290690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:02:12.405455 containerd[1598]: time="2026-01-19T12:02:12.405450761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:12.407193 kubelet[2784]: E0119 12:02:12.405704 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:12.412235 kubelet[2784]: I0119 12:02:12.411969 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hpwks" podStartSLOduration=72.411954796 podStartE2EDuration="1m12.411954796s" podCreationTimestamp="2026-01-19 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:02:12.405895572 +0000 UTC m=+77.513305964" watchObservedRunningTime="2026-01-19 12:02:12.411954796 +0000 UTC m=+77.519365168" Jan 19 12:02:12.414884 kubelet[2784]: E0119 12:02:12.414570 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:12.426641 kubelet[2784]: E0119 12:02:12.426395 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:12.426851 kubelet[2784]: E0119 12:02:12.426655 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:12.426851 kubelet[2784]: E0119 12:02:12.426689 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:12.426976 kubelet[2784]: E0119 12:02:12.426837 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8pgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:12.429795 kubelet[2784]: E0119 12:02:12.429582 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:12.467000 audit[5029]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:12.467000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7974b660 a2=0 a3=7ffd7974b64c items=0 ppid=2962 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:12.479000 audit[5029]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 
12:02:12.479000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd7974b660 a2=0 a3=0 items=0 ppid=2962 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:12.543000 audit: BPF prog-id=226 op=LOAD Jan 19 12:02:12.543000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca13c24d0 a2=94 a3=1 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.543000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.543000 audit: BPF prog-id=226 op=UNLOAD Jan 19 12:02:12.543000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffca13c24d0 a2=94 a3=1 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.543000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.596000 audit: BPF prog-id=227 op=LOAD Jan 19 12:02:12.596000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca13c24c0 a2=94 a3=4 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.596000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.597000 audit: BPF prog-id=227 op=UNLOAD Jan 19 12:02:12.597000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffca13c24c0 a2=0 a3=4 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.597000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.597000 audit: BPF prog-id=228 op=LOAD Jan 19 12:02:12.597000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca13c2320 a2=94 a3=5 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.597000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.600000 audit: BPF prog-id=228 op=UNLOAD Jan 19 12:02:12.600000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffca13c2320 a2=0 a3=5 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.600000 audit: BPF prog-id=229 op=LOAD Jan 19 12:02:12.600000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca13c2540 a2=94 
a3=6 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.600000 audit: BPF prog-id=229 op=UNLOAD Jan 19 12:02:12.600000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffca13c2540 a2=0 a3=6 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.601000 audit: BPF prog-id=230 op=LOAD Jan 19 12:02:12.601000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffca13c1cf0 a2=94 a3=88 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.601000 audit: BPF prog-id=231 op=LOAD Jan 19 12:02:12.601000 audit[5020]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffca13c1b70 a2=94 a3=2 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.602000 audit: BPF prog-id=231 op=UNLOAD Jan 19 12:02:12.602000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffca13c1ba0 a2=0 a3=7ffca13c1ca0 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.603000 audit: BPF prog-id=230 op=UNLOAD Jan 19 12:02:12.603000 audit[5020]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=15c73d10 a2=0 a3=505088e59ff75579 items=0 ppid=4640 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.603000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 12:02:12.611000 audit[5032]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=5032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:12.611000 audit[5032]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6c0ff2d0 a2=0 a3=7fff6c0ff2bc items=0 ppid=2962 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:12.617000 audit[5032]: NETFILTER_CFG table=nat:124 family=2 entries=35 op=nft_register_chain pid=5032 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:12.617000 audit[5032]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff6c0ff2d0 a2=0 a3=7fff6c0ff2bc items=0 ppid=2962 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:12.671680 kubelet[2784]: I0119 12:02:12.671177 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pds2j" podStartSLOduration=72.671160925 podStartE2EDuration="1m12.671160925s" podCreationTimestamp="2026-01-19 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 12:02:12.669132815 +0000 UTC m=+77.776543197" watchObservedRunningTime="2026-01-19 12:02:12.671160925 +0000 UTC m=+77.778571307" Jan 19 12:02:12.682000 audit: BPF prog-id=232 op=LOAD Jan 19 12:02:12.682000 audit[5036]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea76e0c90 a2=98 a3=1999999999999999 items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.682000 audit: BPF prog-id=232 op=UNLOAD Jan 19 12:02:12.682000 audit[5036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffea76e0c60 a3=0 items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.682000 audit: BPF prog-id=233 op=LOAD Jan 19 12:02:12.682000 audit[5036]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea76e0b70 a2=94 a3=ffff items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.682000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.683000 audit: BPF prog-id=233 op=UNLOAD Jan 19 12:02:12.683000 audit[5036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea76e0b70 a2=94 a3=ffff items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.683000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.683000 audit: BPF prog-id=234 op=LOAD Jan 19 12:02:12.683000 audit[5036]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea76e0bb0 a2=94 a3=7ffea76e0d90 items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.683000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.683000 audit: BPF prog-id=234 op=UNLOAD Jan 19 12:02:12.683000 audit[5036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea76e0bb0 a2=94 a3=7ffea76e0d90 items=0 ppid=4640 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.683000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 12:02:12.691724 systemd-networkd[1513]: cali0c54cce35b0: Gained IPv6LL Jan 19 12:02:12.903264 systemd-networkd[1513]: vxlan.calico: Link UP Jan 19 12:02:12.903278 systemd-networkd[1513]: vxlan.calico: Gained carrier Jan 19 12:02:12.979000 audit: BPF prog-id=235 op=LOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd75adb020 a2=98 a3=0 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=235 op=UNLOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd75adaff0 a3=0 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=236 op=LOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd75adae30 a2=94 a3=54428f items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=236 op=UNLOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd75adae30 a2=94 a3=54428f items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=237 op=LOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd75adae60 a2=94 a3=2 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=237 op=UNLOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd75adae60 a2=0 a3=2 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=238 op=LOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd75adac10 a2=94 a3=4 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=238 op=UNLOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd75adac10 a2=94 a3=4 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=239 op=LOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd75adad10 a2=94 a3=7ffd75adae90 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.979000 audit: BPF prog-id=239 op=UNLOAD Jan 19 12:02:12.979000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd75adad10 a2=0 a3=7ffd75adae90 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.979000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.981000 audit: BPF prog-id=240 op=LOAD Jan 19 12:02:12.981000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd75ada440 a2=94 a3=2 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.981000 audit: BPF prog-id=240 op=UNLOAD Jan 19 12:02:12.981000 audit[5061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd75ada440 a2=0 a3=2 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:12.981000 audit: BPF prog-id=241 op=LOAD Jan 19 12:02:12.981000 audit[5061]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd75ada540 a2=94 a3=30 items=0 ppid=4640 pid=5061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:12.981000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 12:02:13.004000 audit: BPF prog-id=242 op=LOAD Jan 19 12:02:13.004000 audit[5064]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8d040130 a2=98 a3=0 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.004000 audit: BPF prog-id=242 op=UNLOAD Jan 19 12:02:13.004000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8d040100 a3=0 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.005000 audit: BPF prog-id=243 op=LOAD Jan 19 12:02:13.005000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d03ff20 a2=94 a3=54428f items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.005000 audit: BPF prog-id=243 op=UNLOAD Jan 19 12:02:13.005000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d03ff20 a2=94 a3=54428f items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.005000 audit: BPF prog-id=244 op=LOAD Jan 19 12:02:13.005000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d03ff50 a2=94 a3=2 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.005000 audit: BPF prog-id=244 op=UNLOAD Jan 19 12:02:13.005000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d03ff50 a2=0 a3=2 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.005000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.231000 audit: BPF prog-id=245 op=LOAD Jan 19 12:02:13.231000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8d03fe10 a2=94 a3=1 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.231000 audit: BPF prog-id=245 op=UNLOAD Jan 19 12:02:13.231000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8d03fe10 a2=94 a3=1 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.231000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=246 op=LOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d03fe00 a2=94 a3=4 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=246 op=UNLOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d03fe00 a2=0 a3=4 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=247 op=LOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8d03fc60 a2=94 a3=5 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=247 op=UNLOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd8d03fc60 a2=0 a3=5 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=248 op=LOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d03fe80 a2=94 a3=6 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.244000 audit: BPF prog-id=248 op=UNLOAD Jan 19 12:02:13.244000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8d03fe80 a2=0 a3=6 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.244000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.245000 audit: BPF prog-id=249 op=LOAD Jan 19 12:02:13.245000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8d03f630 a2=94 a3=88 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.245000 audit: BPF prog-id=250 op=LOAD Jan 19 12:02:13.245000 audit[5064]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8d03f4b0 a2=94 a3=2 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.245000 audit: BPF prog-id=250 op=UNLOAD Jan 19 12:02:13.245000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8d03f4e0 a2=0 a3=7ffd8d03f5e0 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.246000 audit: BPF prog-id=249 op=UNLOAD Jan 19 12:02:13.246000 audit[5064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2ef26d10 a2=0 
a3=8146cfd84fac9c25 items=0 ppid=4640 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.246000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 12:02:13.269000 audit: BPF prog-id=241 op=UNLOAD Jan 19 12:02:13.269000 audit[4640]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001258000 a2=0 a3=0 items=0 ppid=4614 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.269000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 19 12:02:13.270728 systemd-networkd[1513]: cali3901656d1f6: Gained IPv6LL Jan 19 12:02:13.400942 kubelet[2784]: E0119 12:02:13.399715 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:13.400942 kubelet[2784]: E0119 12:02:13.399780 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:13.400942 kubelet[2784]: E0119 12:02:13.400640 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:13.402471 kubelet[2784]: E0119 12:02:13.401833 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:13.406799 kubelet[2784]: E0119 12:02:13.406682 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" 
podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:13.449000 audit[5098]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:02:13.449000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffeff08a540 a2=0 a3=7ffeff08a52c items=0 ppid=4640 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:02:13.484000 audit[5102]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:13.484000 audit[5102]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc348a7740 a2=0 a3=7ffc348a772c items=0 ppid=2962 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:13.494000 audit[5100]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5100 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:02:13.494000 audit[5100]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc96030880 a2=0 a3=7ffc9603086c items=0 ppid=4640 pid=5100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.494000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:02:13.503000 audit[5102]: NETFILTER_CFG table=nat:128 family=2 entries=56 op=nft_register_chain pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:13.503000 audit[5102]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc348a7740 a2=0 a3=7ffc348a772c items=0 ppid=2962 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:13.521000 audit[5097]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=5097 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:02:13.521000 audit[5097]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe0adf8fe0 a2=0 a3=7ffe0adf8fcc items=0 ppid=4640 pid=5097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.521000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:02:13.552000 audit[5104]: NETFILTER_CFG table=filter:130 family=2 entries=292 op=nft_register_chain pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:02:13.552000 audit[5104]: SYSCALL arch=c000003e syscall=46 success=yes exit=171632 a0=3 a1=7ffdc618ee20 a2=0 a3=7ffdc618ee0c items=0 ppid=4640 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:13.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:02:14.355626 systemd-networkd[1513]: vxlan.calico: Gained IPv6LL Jan 19 12:02:14.401794 kubelet[2784]: E0119 12:02:14.401618 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:19.223369 kubelet[2784]: E0119 12:02:19.222922 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:21.817183 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:41578.service - OpenSSH per-connection server daemon (10.0.0.1:41578). Jan 19 12:02:21.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:21.825571 kernel: kauditd_printk_skb: 228 callbacks suppressed Jan 19 12:02:21.825869 kernel: audit: type=1130 audit(1768824141.816:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:22.047000 audit[5129]: USER_ACCT pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.050952 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 41578 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:22.053112 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:22.064203 systemd-logind[1580]: New session 9 of user core. 
Jan 19 12:02:22.049000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.102225 kernel: audit: type=1101 audit(1768824142.047:723): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.102354 kernel: audit: type=1103 audit(1768824142.049:724): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.103735 kernel: audit: type=1006 audit(1768824142.049:725): pid=5129 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 19 12:02:22.120546 kernel: audit: type=1300 audit(1768824142.049:725): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf8c8c20 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.049000 audit[5129]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf8c8c20 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.049000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:22.154277 kernel: audit: type=1327 audit(1768824142.049:725): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:22.155611 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 19 12:02:22.164000 audit[5129]: USER_START pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.167000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.215763 kernel: audit: type=1105 audit(1768824142.164:726): pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.215869 kernel: audit: type=1103 audit(1768824142.167:727): pid=5133 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.220993 containerd[1598]: time="2026-01-19T12:02:22.220737503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,}" Jan 19 12:02:22.477939 sshd[5133]: Connection closed by 10.0.0.1 port 41578 Jan 19 12:02:22.480732 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:22.486000 audit[5129]: USER_END pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.490858 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:41578.service: Deactivated successfully. Jan 19 12:02:22.497382 systemd[1]: session-9.scope: Deactivated successfully. Jan 19 12:02:22.501995 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 19 12:02:22.503918 systemd-logind[1580]: Removed session 9. 
Jan 19 12:02:22.487000 audit[5129]: CRED_DISP pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.541613 kernel: audit: type=1106 audit(1768824142.486:728): pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.541854 kernel: audit: type=1104 audit(1768824142.487:729): pid=5129 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:22.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:22.617806 systemd-networkd[1513]: cali6b04bf43cb8: Link UP Jan 19 12:02:22.619999 systemd-networkd[1513]: cali6b04bf43cb8: Gained carrier Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.382 [INFO][5142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0 calico-apiserver-75b8686f69- calico-apiserver bd84ecc0-0a49-4305-b46e-8f992897ba53 876 0 2026-01-19 12:01:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75b8686f69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75b8686f69-vv49f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b04bf43cb8 [] [] }} ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.384 [INFO][5142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.482 [INFO][5160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" HandleID="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Workload="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.482 [INFO][5160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" HandleID="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Workload="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba690), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-75b8686f69-vv49f", "timestamp":"2026-01-19 12:02:22.482194031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.482 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.482 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.482 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.544 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.558 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.570 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.574 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.579 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.579 [INFO][5160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.584 [INFO][5160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.592 [INFO][5160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.604 [INFO][5160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.604 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" host="localhost" Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.604 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 12:02:22.666697 containerd[1598]: 2026-01-19 12:02:22.604 [INFO][5160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" HandleID="k8s-pod-network.5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Workload="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.611 [INFO][5142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0", GenerateName:"calico-apiserver-75b8686f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd84ecc0-0a49-4305-b46e-8f992897ba53", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8686f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75b8686f69-vv49f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b04bf43cb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.611 [INFO][5142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.611 [INFO][5142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b04bf43cb8 ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.620 [INFO][5142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.631 [INFO][5142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0", GenerateName:"calico-apiserver-75b8686f69-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd84ecc0-0a49-4305-b46e-8f992897ba53", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 12, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75b8686f69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df", Pod:"calico-apiserver-75b8686f69-vv49f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b04bf43cb8", MAC:"ee:4d:33:44:99:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 12:02:22.667941 containerd[1598]: 2026-01-19 12:02:22.659 [INFO][5142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" Namespace="calico-apiserver" Pod="calico-apiserver-75b8686f69-vv49f" WorkloadEndpoint="localhost-k8s-calico--apiserver--75b8686f69--vv49f-eth0" Jan 19 12:02:22.698000 audit[5192]: NETFILTER_CFG table=filter:131 family=2 entries=53 op=nft_register_chain pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 12:02:22.698000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7ffdaff28330 a2=0 a3=7ffdaff2831c items=0 ppid=4640 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.698000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 12:02:22.752506 containerd[1598]: time="2026-01-19T12:02:22.752377189Z" level=info msg="connecting to shim 5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df" address="unix:///run/containerd/s/3d5852911a489e05ca422c45069fb3d95400a77a648777f4ab27219620e0a32e" namespace=k8s.io protocol=ttrpc version=3 Jan 19 12:02:22.840752 systemd[1]: Started cri-containerd-5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df.scope - libcontainer container 5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df. 
Jan 19 12:02:22.881000 audit: BPF prog-id=251 op=LOAD Jan 19 12:02:22.882000 audit: BPF prog-id=252 op=LOAD Jan 19 12:02:22.882000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.882000 audit: BPF prog-id=252 op=UNLOAD Jan 19 12:02:22.882000 audit[5213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.883000 audit: BPF prog-id=253 op=LOAD Jan 19 12:02:22.883000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.883000 audit: BPF prog-id=254 op=LOAD Jan 19 12:02:22.883000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.884000 audit: BPF prog-id=254 op=UNLOAD Jan 19 12:02:22.884000 audit[5213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.884000 audit: BPF prog-id=253 op=UNLOAD Jan 19 12:02:22.884000 audit[5213]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.884000 audit: BPF prog-id=255 op=LOAD Jan 19 12:02:22.884000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5202 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:22.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564623536633830353662303935656138333037656435393533643665 Jan 19 12:02:22.890104 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 19 12:02:22.982260 containerd[1598]: time="2026-01-19T12:02:22.982159327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75b8686f69-vv49f,Uid:bd84ecc0-0a49-4305-b46e-8f992897ba53,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5db56c8056b095ea8307ed5953d6ed69b778fdb9cca30625992b4a0ad448c9df\"" Jan 19 12:02:22.988486 containerd[1598]: time="2026-01-19T12:02:22.988244132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:02:23.097795 containerd[1598]: time="2026-01-19T12:02:23.095929349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:23.099789 containerd[1598]: time="2026-01-19T12:02:23.099548205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:02:23.099904 containerd[1598]: time="2026-01-19T12:02:23.099715889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:23.100608 kubelet[2784]: E0119 12:02:23.100359 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:23.100608 kubelet[2784]: E0119 12:02:23.100468 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:23.101227 kubelet[2784]: E0119 12:02:23.100755 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcr55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:23.102277 kubelet[2784]: E0119 12:02:23.102251 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:23.451528 kubelet[2784]: E0119 12:02:23.450963 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:23.509000 audit[5241]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:23.509000 audit[5241]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5248 a0=3 a1=7ffc73c101d0 a2=0 a3=7ffc73c101bc items=0 ppid=2962 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:23.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:23.543000 audit[5241]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:02:23.543000 audit[5241]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc73c101d0 a2=0 a3=7ffc73c101bc items=0 ppid=2962 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:23.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:02:23.955598 systemd-networkd[1513]: cali6b04bf43cb8: Gained IPv6LL Jan 19 12:02:24.454478 kubelet[2784]: E0119 12:02:24.454317 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:25.233863 containerd[1598]: time="2026-01-19T12:02:25.230919664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:02:25.338358 containerd[1598]: time="2026-01-19T12:02:25.338226758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:25.342505 containerd[1598]: time="2026-01-19T12:02:25.342309776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:02:25.342505 containerd[1598]: time="2026-01-19T12:02:25.342379029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:25.342949 kubelet[2784]: E0119 12:02:25.342779 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:25.342949 kubelet[2784]: E0119 12:02:25.342903 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:25.345240 kubelet[2784]: E0119 12:02:25.345160 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:25.350222 containerd[1598]: time="2026-01-19T12:02:25.349784871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:02:25.430635 containerd[1598]: time="2026-01-19T12:02:25.429906747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:25.433576 containerd[1598]: time="2026-01-19T12:02:25.433259382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:02:25.433576 containerd[1598]: time="2026-01-19T12:02:25.433372177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:25.434261 kubelet[2784]: E0119 12:02:25.433872 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:25.434261 kubelet[2784]: E0119 12:02:25.433988 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:25.435312 kubelet[2784]: E0119 12:02:25.434344 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:25.440431 kubelet[2784]: E0119 12:02:25.439950 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:26.239806 containerd[1598]: time="2026-01-19T12:02:26.226218259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:02:26.351206 containerd[1598]: time="2026-01-19T12:02:26.350475244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 19 12:02:26.353917 containerd[1598]: time="2026-01-19T12:02:26.353602748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:02:26.353917 containerd[1598]: time="2026-01-19T12:02:26.353828741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:26.354857 kubelet[2784]: E0119 12:02:26.354204 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:26.354857 kubelet[2784]: E0119 12:02:26.354258 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:26.354857 kubelet[2784]: E0119 12:02:26.354388 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s6vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:26.356187 kubelet[2784]: E0119 12:02:26.356073 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:27.224453 containerd[1598]: time="2026-01-19T12:02:27.224295820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:02:27.346638 containerd[1598]: time="2026-01-19T12:02:27.346258761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:27.349559 containerd[1598]: time="2026-01-19T12:02:27.349396794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:02:27.349559 containerd[1598]: time="2026-01-19T12:02:27.349481924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:27.349885 kubelet[2784]: E0119 12:02:27.349841 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:27.349950 kubelet[2784]: E0119 12:02:27.349894 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:27.350237 kubelet[2784]: E0119 12:02:27.350156 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6d633e8fb4c48d3a2956485955c24a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:27.354996 containerd[1598]: time="2026-01-19T12:02:27.354607626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:02:27.436237 containerd[1598]: time="2026-01-19T12:02:27.435835525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:27.440483 containerd[1598]: time="2026-01-19T12:02:27.440132498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:02:27.440483 containerd[1598]: time="2026-01-19T12:02:27.440228559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:27.442545 kubelet[2784]: E0119 12:02:27.441177 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:27.442545 kubelet[2784]: E0119 12:02:27.441239 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:27.442545 kubelet[2784]: E0119 12:02:27.441384 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:27.443514 kubelet[2784]: E0119 12:02:27.443244 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:27.509834 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:37188.service - OpenSSH per-connection server daemon (10.0.0.1:37188). Jan 19 12:02:27.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:37188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:02:27.537359 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 19 12:02:27.537532 kernel: audit: type=1130 audit(1768824147.508:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:37188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:27.803000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.807293 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 37188 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:27.810616 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:27.825978 systemd-logind[1580]: New session 10 of user core. Jan 19 12:02:27.840192 kernel: audit: type=1101 audit(1768824147.803:743): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.840314 kernel: audit: type=1103 audit(1768824147.805:744): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.805000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.876951 kernel: audit: type=1006 audit(1768824147.805:745): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 19 12:02:27.877244 kernel: audit: type=1300 audit(1768824147.805:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9681f470 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:27.805000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9681f470 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:27.805000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:27.917100 kernel: audit: type=1327 audit(1768824147.805:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:27.918838 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 19 12:02:27.924000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.930000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.977631 kernel: audit: type=1105 audit(1768824147.924:746): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:27.977868 kernel: audit: type=1103 audit(1768824147.930:747): pid=5254 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:28.177626 sshd[5254]: Connection closed by 10.0.0.1 port 37188 Jan 19 12:02:28.178278 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:28.185000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:28.191682 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:37188.service: Deactivated successfully. Jan 19 12:02:28.195786 systemd[1]: session-10.scope: Deactivated successfully. Jan 19 12:02:28.198999 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 19 12:02:28.202242 systemd-logind[1580]: Removed session 10. Jan 19 12:02:28.221321 kernel: audit: type=1106 audit(1768824148.185:748): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:28.185000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:28.228779 containerd[1598]: time="2026-01-19T12:02:28.228352558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:02:28.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:37188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:02:28.247299 kernel: audit: type=1104 audit(1768824148.185:749): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:28.334201 containerd[1598]: time="2026-01-19T12:02:28.333839729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:28.337284 containerd[1598]: time="2026-01-19T12:02:28.337223951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:02:28.337422 containerd[1598]: time="2026-01-19T12:02:28.337347734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:28.338496 kubelet[2784]: E0119 12:02:28.338395 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:28.338496 kubelet[2784]: E0119 12:02:28.338454 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:28.338667 kubelet[2784]: E0119 12:02:28.338573 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8pgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:28.340203 kubelet[2784]: E0119 12:02:28.339915 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:29.222782 containerd[1598]: time="2026-01-19T12:02:29.222628680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:02:29.318406 containerd[1598]: time="2026-01-19T12:02:29.317865312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:29.320873 containerd[1598]: time="2026-01-19T12:02:29.320621982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:02:29.320982 containerd[1598]: time="2026-01-19T12:02:29.320793973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:29.321898 kubelet[2784]: E0119 12:02:29.321576 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:29.321898 kubelet[2784]: E0119 12:02:29.321631 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:29.322375 kubelet[2784]: E0119 12:02:29.321878 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:29.324531 kubelet[2784]: E0119 12:02:29.324279 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:30.221618 kubelet[2784]: E0119 12:02:30.221336 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:33.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:33.224909 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:36504.service - OpenSSH per-connection server daemon (10.0.0.1:36504). Jan 19 12:02:33.237244 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:02:33.237322 kernel: audit: type=1130 audit(1768824153.224:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:33.455989 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 36504 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:33.453000 audit[5273]: USER_ACCT pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.463448 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:33.502875 systemd-logind[1580]: New session 11 of user core. Jan 19 12:02:33.531919 kernel: audit: type=1101 audit(1768824153.453:752): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.532246 kernel: audit: type=1103 audit(1768824153.458:753): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.458000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.586522 systemd[1]: Started session-11.scope - Session 11 of User core. 
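The kubelet's "Nameserver limits exceeded" warning above is emitted when the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the log shows only the three servers that were kept (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of the same count, done by hand on the node — the limit of 3 is an assumed kubelet default, not a value printed in this log:

```go
// resolvconf_check.go - count nameserver entries the way the warning above implies.
// Assumption: the kubelet keeps at most 3 nameservers; the log only shows which
// three survived, not the configured limit.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const assumedLimit = 3
	fmt.Printf("%d nameservers found: %v\n", len(servers), servers)
	if len(servers) > assumedLimit {
		fmt.Printf("only the first %d would be applied: %v\n", assumedLimit, servers[:assumedLimit])
	}
}
```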
Jan 19 12:02:33.458000 audit[5273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec7995a70 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:33.696516 kernel: audit: type=1006 audit(1768824153.458:754): pid=5273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 19 12:02:33.696642 kernel: audit: type=1300 audit(1768824153.458:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec7995a70 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:33.696678 kernel: audit: type=1327 audit(1768824153.458:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:33.458000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:33.633000 audit[5273]: USER_START pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.741542 kernel: audit: type=1105 audit(1768824153.633:755): pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.743987 kernel: audit: type=1103 audit(1768824153.655:756): pid=5282 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:33.655000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:34.032269 sshd[5282]: Connection closed by 10.0.0.1 port 36504 Jan 19 12:02:34.033262 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:34.033000 audit[5273]: USER_END pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:34.041355 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 19 12:02:34.043858 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:36504.service: Deactivated successfully. Jan 19 12:02:34.050195 systemd[1]: session-11.scope: Deactivated successfully. Jan 19 12:02:34.055707 systemd-logind[1580]: Removed session 11. 
Jan 19 12:02:34.074142 kernel: audit: type=1106 audit(1768824154.033:757): pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:34.074263 kernel: audit: type=1104 audit(1768824154.034:758): pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:34.034000 audit[5273]: CRED_DISP pid=5273 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:34.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:37.242702 containerd[1598]: time="2026-01-19T12:02:37.242635247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:02:37.331215 containerd[1598]: time="2026-01-19T12:02:37.328756655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:37.339000 containerd[1598]: time="2026-01-19T12:02:37.338945773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:02:37.339751 containerd[1598]: time="2026-01-19T12:02:37.339211471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:37.339904 kubelet[2784]: E0119 12:02:37.339517 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:37.339904 kubelet[2784]: E0119 12:02:37.339569 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:37.339904 kubelet[2784]: E0119 12:02:37.339716 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcr55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:37.342337 kubelet[2784]: E0119 12:02:37.341986 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:39.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:39.057619 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:36512.service - OpenSSH per-connection server daemon (10.0.0.1:36512). Jan 19 12:02:39.064614 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:02:39.066247 kernel: audit: type=1130 audit(1768824159.056:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:02:39.227232 kubelet[2784]: E0119 12:02:39.226674 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:39.263000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.344215 kernel: audit: type=1101 audit(1768824159.263:761): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.277960 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:39.346671 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 36512 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:39.274000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.377954 systemd-logind[1580]: New session 12 of user core. Jan 19 12:02:39.415276 kernel: audit: type=1103 audit(1768824159.274:762): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.416418 kernel: audit: type=1006 audit(1768824159.274:763): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 19 12:02:39.416460 kernel: audit: type=1300 audit(1768824159.274:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc70c2fc0 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:39.274000 audit[5297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc70c2fc0 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:39.417684 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 19 12:02:39.274000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:39.507263 kernel: audit: type=1327 audit(1768824159.274:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:39.442000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.470000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.573313 kernel: audit: type=1105 audit(1768824159.442:764): pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.573441 kernel: audit: type=1103 audit(1768824159.470:765): pid=5325 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.773317 kubelet[2784]: E0119 12:02:39.773209 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:39.841586 sshd[5325]: Connection closed by 10.0.0.1 port 36512 Jan 19 12:02:39.843402 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:39.856000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.861657 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 19 12:02:39.864334 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:36512.service: Deactivated successfully. Jan 19 12:02:39.869008 systemd[1]: session-12.scope: Deactivated successfully. Jan 19 12:02:39.873774 systemd-logind[1580]: Removed session 12. 
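The recurring containerd lines `fetch failed after status: 404 Not Found" host=ghcr.io` followed by `failed to resolve image ... not found` indicate that the registry has no manifest for the `v3.30.4` tags being requested. A minimal sketch of the same probe against the OCI distribution API is below; the anonymous-token step and the exact repository path are assumptions inferred from the image reference in the log, not something the log itself confirms:

```go
// check_tag.go - probe whether the tag containerd failed to resolve exists on ghcr.io.
// Assumption: ghcr.io issues anonymous pull tokens from its /token endpoint, as most
// public OCI registries do; repo/tag are copied from the image reference in the log.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo := "flatcar/calico/apiserver"
	tag := "v3.30.4"

	// Step 1: request an anonymous bearer token scoped for pulling this repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest for the tag. A 200 means the tag resolves;
	// a 404 matches the "not found" errors recorded above.
	manURL := fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag)
	req, _ := http.NewRequest(http.MethodHead, manURL, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	manResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer manResp.Body.Close()
	fmt.Println(manURL, "->", manResp.Status)
}
```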
Jan 19 12:02:39.856000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.934646 kernel: audit: type=1106 audit(1768824159.856:766): pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.934744 kernel: audit: type=1104 audit(1768824159.856:767): pid=5297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:39.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:40.234382 kubelet[2784]: E0119 12:02:40.234226 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:40.234382 kubelet[2784]: E0119 12:02:40.234255 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:41.234202 kubelet[2784]: E0119 12:02:41.232186 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:43.224814 kubelet[2784]: E0119 12:02:43.224692 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:44.868367 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:38436.service - OpenSSH per-connection server daemon (10.0.0.1:38436). Jan 19 12:02:44.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:38436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:44.876336 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:02:44.876410 kernel: audit: type=1130 audit(1768824164.867:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:38436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:45.039000 audit[5347]: USER_ACCT pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.041769 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 38436 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:45.045160 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:45.062605 systemd-logind[1580]: New session 13 of user core. 
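The `ImagePullBackOff` entries interleaved with the SSH audit records show the kubelet spacing out its retries for each failing image rather than pulling in a tight loop. A rough sketch of that spacing is below; the 10 s initial delay and 5 m cap are assumed kubelet defaults, since the log only shows that successive "Back-off pulling image" messages arrive progressively further apart:

```go
// backoff_sketch.go - rough shape of the retry spacing behind the repeated
// "Back-off pulling image" messages. Initial delay and cap are assumptions,
// not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d scheduled after %v back-off\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```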
Jan 19 12:02:45.041000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.104806 kernel: audit: type=1101 audit(1768824165.039:770): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.104988 kernel: audit: type=1103 audit(1768824165.041:771): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.105184 kernel: audit: type=1006 audit(1768824165.041:772): pid=5347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 19 12:02:45.124803 kernel: audit: type=1300 audit(1768824165.041:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff31c0b2f0 a2=3 a3=0 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:45.041000 audit[5347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff31c0b2f0 a2=3 a3=0 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:45.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:45.162468 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 19 12:02:45.170000 audit[5347]: USER_START pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.217177 kernel: audit: type=1327 audit(1768824165.041:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:45.217624 kernel: audit: type=1105 audit(1768824165.170:773): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.217670 kernel: audit: type=1103 audit(1768824165.174:774): pid=5351 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.174000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.415462 sshd[5351]: Connection closed by 10.0.0.1 port 38436 Jan 19 12:02:45.415353 sshd-session[5347]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:45.417000 audit[5347]: USER_END pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.457348 kernel: audit: type=1106 audit(1768824165.417:775): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.457462 kernel: audit: type=1104 audit(1768824165.417:776): pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.417000 audit[5347]: CRED_DISP pid=5347 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.495595 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:38436.service: Deactivated successfully. Jan 19 12:02:45.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:38436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:45.499502 systemd[1]: session-13.scope: Deactivated successfully. Jan 19 12:02:45.501992 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. 
Jan 19 12:02:45.507550 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:38448.service - OpenSSH per-connection server daemon (10.0.0.1:38448). Jan 19 12:02:45.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:38448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:45.509706 systemd-logind[1580]: Removed session 13. Jan 19 12:02:45.619800 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 38448 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:45.617000 audit[5366]: USER_ACCT pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.620000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.620000 audit[5366]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2427fd30 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:45.620000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:45.624307 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:45.641355 systemd-logind[1580]: New session 14 of user core. Jan 19 12:02:45.657642 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 19 12:02:45.663000 audit[5366]: USER_START pid=5366 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:45.668000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.009351 sshd[5370]: Connection closed by 10.0.0.1 port 38448 Jan 19 12:02:46.018711 sshd-session[5366]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:46.020000 audit[5366]: USER_END pid=5366 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.021000 audit[5366]: CRED_DISP pid=5366 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.037601 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:38448.service: Deactivated successfully. Jan 19 12:02:46.038387 systemd-logind[1580]: Session 14 logged out. 
Waiting for processes to exit. Jan 19 12:02:46.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:38448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:46.046300 systemd[1]: session-14.scope: Deactivated successfully. Jan 19 12:02:46.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:38456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:46.060414 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:38456.service - OpenSSH per-connection server daemon (10.0.0.1:38456). Jan 19 12:02:46.066177 systemd-logind[1580]: Removed session 14. Jan 19 12:02:46.223000 audit[5382]: USER_ACCT pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.226312 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 38456 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:46.227000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.227000 audit[5382]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe680f72b0 a2=3 a3=0 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:46.227000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:46.231281 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:46.259344 systemd-logind[1580]: New session 15 of user core. Jan 19 12:02:46.272506 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 19 12:02:46.290000 audit[5382]: USER_START pid=5382 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.296000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.758192 sshd[5386]: Connection closed by 10.0.0.1 port 38456 Jan 19 12:02:46.756631 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:46.759000 audit[5382]: USER_END pid=5382 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.759000 audit[5382]: CRED_DISP pid=5382 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:46.768456 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Jan 19 12:02:46.772739 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:38456.service: Deactivated successfully. Jan 19 12:02:46.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:38456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:46.787755 systemd[1]: session-15.scope: Deactivated successfully. Jan 19 12:02:46.795520 systemd-logind[1580]: Removed session 15. Jan 19 12:02:51.799608 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). Jan 19 12:02:51.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:51.826248 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 19 12:02:51.826327 kernel: audit: type=1130 audit(1768824171.798:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:02:51.988000 audit[5403]: USER_ACCT pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:51.990390 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:51.997745 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:52.030491 kernel: audit: type=1101 audit(1768824171.988:797): pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.030585 kernel: audit: type=1103 audit(1768824171.993:798): pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:51.993000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.053580 systemd-logind[1580]: New session 16 of user core. Jan 19 12:02:51.993000 audit[5403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3de95f20 a2=3 a3=0 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:52.144816 kernel: audit: type=1006 audit(1768824171.993:799): pid=5403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 19 12:02:52.144913 kernel: audit: type=1300 audit(1768824171.993:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3de95f20 a2=3 a3=0 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:51.993000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:52.166659 kernel: audit: type=1327 audit(1768824171.993:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:52.166652 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 19 12:02:52.176000 audit[5403]: USER_START pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.221552 kernel: audit: type=1105 audit(1768824172.176:800): pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.221669 kernel: audit: type=1103 audit(1768824172.187:801): pid=5407 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.187000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.229737 kubelet[2784]: E0119 12:02:52.229511 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:02:52.236240 containerd[1598]: time="2026-01-19T12:02:52.234680183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:02:52.369332 containerd[1598]: time="2026-01-19T12:02:52.368177608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:52.387327 containerd[1598]: time="2026-01-19T12:02:52.387210461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:02:52.387327 containerd[1598]: time="2026-01-19T12:02:52.387323172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:52.390178 kubelet[2784]: E0119 12:02:52.387889 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:52.390178 kubelet[2784]: E0119 12:02:52.388247 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:02:52.390178 kubelet[2784]: E0119 12:02:52.388464 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:52.391327 containerd[1598]: time="2026-01-19T12:02:52.390890123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:02:52.489455 containerd[1598]: time="2026-01-19T12:02:52.489346212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:52.493875 containerd[1598]: time="2026-01-19T12:02:52.493687282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:02:52.497393 containerd[1598]: time="2026-01-19T12:02:52.495447448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:52.498266 kubelet[2784]: E0119 12:02:52.498220 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:52.498377 kubelet[2784]: E0119 12:02:52.498357 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:02:52.503352 kubelet[2784]: E0119 12:02:52.500280 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s6vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:52.503352 kubelet[2784]: E0119 12:02:52.502505 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:02:52.504565 containerd[1598]: time="2026-01-19T12:02:52.504347062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:02:52.578617 sshd[5407]: Connection closed by 10.0.0.1 port 38470 Jan 19 12:02:52.580329 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:52.582793 containerd[1598]: time="2026-01-19T12:02:52.582673235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:52.585000 audit[5403]: USER_END pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.590826 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:38470.service: Deactivated successfully. Jan 19 12:02:52.597386 containerd[1598]: time="2026-01-19T12:02:52.590589644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:52.597386 containerd[1598]: time="2026-01-19T12:02:52.590672560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:02:52.596666 systemd[1]: session-16.scope: Deactivated successfully. Jan 19 12:02:52.597572 kubelet[2784]: E0119 12:02:52.594906 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:52.597572 kubelet[2784]: E0119 12:02:52.597401 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:02:52.597635 kubelet[2784]: E0119 12:02:52.597565 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:52.599435 kubelet[2784]: E0119 12:02:52.598659 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:02:52.615646 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Jan 19 12:02:52.619879 systemd-logind[1580]: Removed session 16. 
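Every PullImage failure recorded above follows the same shape: containerd resolves the tag against ghcr.io, receives a 404, and returns NotFound over CRI, which kubelet surfaces first as ErrImagePull and then as ImagePullBackOff for the affected pods. Below is a minimal sketch of reproducing one of these pulls directly against the node's containerd, using the public containerd Go client; the socket path and the "k8s.io" namespace are the usual kubelet defaults and are assumptions here, and the image reference is copied verbatim from the log.

```go
// Hypothetical diagnostic: attempt the same pull containerd performed for kubelet.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd over its default socket (assumption: stock layout;
	// adjust the path if the socket lives elsewhere on the node).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference taken from the failing PullImage entries above.
	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// For the tags in this log, this fails the same way containerd reports:
		// "failed to resolve image ...: not found" (HTTP 404 from ghcr.io).
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled:", img.Name())
}
```

If the pull fails here as well, the problem is the registry tag itself rather than kubelet or CRI wiring, which matches the "fetch failed after status: 404 Not Found" entries from containerd.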
Jan 19 12:02:52.585000 audit[5403]: CRED_DISP pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.653325 kernel: audit: type=1106 audit(1768824172.585:802): pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.653617 kernel: audit: type=1104 audit(1768824172.585:803): pid=5403 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:52.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:53.234853 containerd[1598]: time="2026-01-19T12:02:53.234540529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:02:53.331785 containerd[1598]: time="2026-01-19T12:02:53.331516906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:53.336247 containerd[1598]: time="2026-01-19T12:02:53.335537074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:02:53.336521 containerd[1598]: time="2026-01-19T12:02:53.336232063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:53.340291 kubelet[2784]: E0119 12:02:53.339667 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:53.340291 kubelet[2784]: E0119 12:02:53.339723 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:02:53.341622 kubelet[2784]: E0119 12:02:53.341580 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8pgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:53.344602 kubelet[2784]: E0119 12:02:53.344532 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:02:53.344905 containerd[1598]: time="2026-01-19T12:02:53.344883009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 12:02:53.465566 containerd[1598]: time="2026-01-19T12:02:53.465386145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:53.472683 containerd[1598]: time="2026-01-19T12:02:53.472549891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 12:02:53.473635 containerd[1598]: time="2026-01-19T12:02:53.472927367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:53.476728 kubelet[2784]: E0119 
12:02:53.473919 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:53.478533 kubelet[2784]: E0119 12:02:53.478340 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 12:02:53.479370 kubelet[2784]: E0119 12:02:53.478796 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f6d633e8fb4c48d3a2956485955c24a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:53.488882 containerd[1598]: time="2026-01-19T12:02:53.488472189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 12:02:53.570838 containerd[1598]: time="2026-01-19T12:02:53.570785625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:53.576529 containerd[1598]: time="2026-01-19T12:02:53.576489778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 12:02:53.577471 containerd[1598]: time="2026-01-19T12:02:53.576723847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:53.580325 kubelet[2784]: E0119 12:02:53.580283 2784 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:53.581302 kubelet[2784]: E0119 12:02:53.580463 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 12:02:53.581302 kubelet[2784]: E0119 12:02:53.580613 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h446n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-757f87bb85-qrkvc_calico-system(cfea8a68-b054-48f1-88a4-ee8fd7bce007): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:53.586782 kubelet[2784]: E0119 12:02:53.586311 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:02:55.240779 kubelet[2784]: E0119 12:02:55.240555 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:02:57.234540 containerd[1598]: time="2026-01-19T12:02:57.224587592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 12:02:57.309427 containerd[1598]: time="2026-01-19T12:02:57.307680220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:02:57.317213 containerd[1598]: time="2026-01-19T12:02:57.316332977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 12:02:57.317213 containerd[1598]: time="2026-01-19T12:02:57.316806020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 12:02:57.318480 kubelet[2784]: E0119 12:02:57.317634 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:57.318480 kubelet[2784]: E0119 12:02:57.317784 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 12:02:57.320738 kubelet[2784]: E0119 12:02:57.320664 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xg6zc_calico-system(3f3cc3a4-a155-47d4-9e99-0f5c3bb53331): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 12:02:57.335706 kubelet[2784]: E0119 12:02:57.327796 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:02:57.611802 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:55936.service - OpenSSH per-connection server daemon (10.0.0.1:55936). Jan 19 12:02:57.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:57.622647 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:02:57.622733 kernel: audit: type=1130 audit(1768824177.612:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:02:57.891000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:57.892656 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 55936 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:02:57.897394 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:02:57.910425 systemd-logind[1580]: New session 17 of user core. 
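The "Nameserver limits exceeded" entry a few records back is kubelet noting that the host resolv.conf lists more nameservers than it will hand to pod sandboxes, so it keeps only the first few (here 1.1.1.1, 1.0.0.1 and 8.8.8.8 were applied). The sketch below is a rough illustration of that truncation only, not kubelet's actual code; the limit of three and the resolv.conf path are assumptions for the example.

```go
// Illustrative only: mimic the nameserver truncation kubelet warns about.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed limit for this sketch

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", strings.Join(servers, " "))
}
```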
Jan 19 12:02:57.895000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:57.971869 kernel: audit: type=1101 audit(1768824177.891:806): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:57.972200 kernel: audit: type=1103 audit(1768824177.895:807): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:57.997721 kernel: audit: type=1006 audit(1768824177.895:808): pid=5429 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 19 12:02:57.997845 kernel: audit: type=1300 audit(1768824177.895:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a7d50e0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:57.895000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a7d50e0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:02:57.895000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:58.048417 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 19 12:02:58.061335 kernel: audit: type=1327 audit(1768824177.895:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:02:58.064000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.119794 kernel: audit: type=1105 audit(1768824178.064:809): pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.119886 kernel: audit: type=1103 audit(1768824178.072:810): pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.072000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.333628 sshd[5433]: Connection closed by 10.0.0.1 port 55936 Jan 19 12:02:58.334418 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Jan 19 12:02:58.337000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.387685 kernel: audit: type=1106 audit(1768824178.337:811): pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.340000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.391780 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:55936.service: Deactivated successfully. Jan 19 12:02:58.397750 systemd[1]: session-17.scope: Deactivated successfully. Jan 19 12:02:58.401629 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Jan 19 12:02:58.404421 systemd-logind[1580]: Removed session 17. Jan 19 12:02:58.434303 kernel: audit: type=1104 audit(1768824178.340:812): pid=5429 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:02:58.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 19 12:03:03.241944 kubelet[2784]: E0119 12:03:03.241574 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:03:03.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:36932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:03.387888 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:36932.service - OpenSSH per-connection server daemon (10.0.0.1:36932). Jan 19 12:03:03.395350 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:03:03.395754 kernel: audit: type=1130 audit(1768824183.387:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:36932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:03.626000 audit[5450]: USER_ACCT pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.630702 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 36932 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:03.648922 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:03.633000 audit[5450]: CRED_ACQ pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.688695 systemd-logind[1580]: New session 18 of user core. 
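By this point each failed pull has been demoted from ErrImagePull to ImagePullBackOff, and the "Back-off pulling image" entries recur at growing intervals, capped in stock kubelet at roughly five minutes. The following is a generic sketch of that exponential back-off pattern; the attempt count, initial delay, and cap are illustrative assumptions, not values read out of this kubelet.

```go
// Generic exponential back-off with a cap, illustrating the retry pattern
// behind the repeated ImagePullBackOff entries (not kubelet's implementation).
package main

import (
	"errors"
	"fmt"
	"time"
)

func withBackoff(attempts int, initial, max time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		fmt.Printf("attempt %d failed (%v); backing off %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Stand-in for a pull that always fails with "not found", like the ghcr.io
	// pulls in this log. Millisecond delays keep the demo short; kubelet's real
	// back-off runs on a seconds-to-minutes scale.
	pull := func() error { return errors.New("resolve image: not found") }
	if err := withBackoff(5, 10*time.Millisecond, 5*time.Second, pull); err != nil {
		fmt.Println(err)
	}
}
```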
Jan 19 12:03:03.718681 kernel: audit: type=1101 audit(1768824183.626:815): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.718815 kernel: audit: type=1103 audit(1768824183.633:816): pid=5450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.719654 kernel: audit: type=1006 audit(1768824183.638:817): pid=5450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 19 12:03:03.638000 audit[5450]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1b0cc7d0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:03.761861 kernel: audit: type=1300 audit(1768824183.638:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1b0cc7d0 a2=3 a3=0 items=0 ppid=1 pid=5450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:03.763839 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 19 12:03:03.805478 kernel: audit: type=1327 audit(1768824183.638:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:03.638000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:03.772000 audit[5450]: USER_START pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.889477 kernel: audit: type=1105 audit(1768824183.772:818): pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.889612 kernel: audit: type=1103 audit(1768824183.780:819): pid=5454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:03.780000 audit[5454]: CRED_ACQ pid=5454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:04.200576 sshd[5454]: Connection closed by 10.0.0.1 port 36932 Jan 19 12:03:04.202514 sshd-session[5450]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:04.205000 audit[5450]: USER_END pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:04.213400 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:36932.service: Deactivated successfully. Jan 19 12:03:04.215583 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Jan 19 12:03:04.220925 systemd[1]: session-18.scope: Deactivated successfully. Jan 19 12:03:04.228524 systemd-logind[1580]: Removed session 18. Jan 19 12:03:04.256359 containerd[1598]: time="2026-01-19T12:03:04.256326736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:03:04.206000 audit[5450]: CRED_DISP pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:04.314652 kernel: audit: type=1106 audit(1768824184.205:820): pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:04.315510 kernel: audit: type=1104 audit(1768824184.206:821): pid=5450 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:04.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:36932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:03:04.354151 containerd[1598]: time="2026-01-19T12:03:04.353759927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:03:04.360487 containerd[1598]: time="2026-01-19T12:03:04.359897449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:03:04.360487 containerd[1598]: time="2026-01-19T12:03:04.360365818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:03:04.360844 kubelet[2784]: E0119 12:03:04.360808 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:03:04.362931 kubelet[2784]: E0119 12:03:04.361730 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:03:04.362931 kubelet[2784]: E0119 12:03:04.362437 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcr55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-vv49f_calico-apiserver(bd84ecc0-0a49-4305-b46e-8f992897ba53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:03:04.364518 kubelet[2784]: E0119 12:03:04.363964 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:03:05.226639 kubelet[2784]: E0119 12:03:05.225348 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:03:06.257892 kubelet[2784]: E0119 12:03:06.257512 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:03:07.238010 kubelet[2784]: E0119 12:03:07.237689 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:03:09.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:36938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:09.225869 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:36938.service - OpenSSH per-connection server daemon (10.0.0.1:36938). Jan 19 12:03:09.233484 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:03:09.233824 kernel: audit: type=1130 audit(1768824189.225:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:36938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:09.405000 audit[5479]: USER_ACCT pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.407400 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:09.415661 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:09.409000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.442638 systemd-logind[1580]: New session 19 of user core. 
Jan 19 12:03:09.474569 kernel: audit: type=1101 audit(1768824189.405:824): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.474695 kernel: audit: type=1103 audit(1768824189.409:825): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.474747 kernel: audit: type=1006 audit(1768824189.409:826): pid=5479 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 19 12:03:09.499556 kernel: audit: type=1300 audit(1768824189.409:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a4ba3e0 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:09.409000 audit[5479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a4ba3e0 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:09.409000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:09.565473 kernel: audit: type=1327 audit(1768824189.409:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:09.567418 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 19 12:03:09.577000 audit[5479]: USER_START pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.582000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.664523 kernel: audit: type=1105 audit(1768824189.577:827): pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.664665 kernel: audit: type=1103 audit(1768824189.582:828): pid=5498 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.893597 sshd[5498]: Connection closed by 10.0.0.1 port 36938 Jan 19 12:03:09.894003 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:09.896000 audit[5479]: USER_END pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.903912 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:36938.service: Deactivated successfully. Jan 19 12:03:09.908658 systemd[1]: session-19.scope: Deactivated successfully. Jan 19 12:03:09.914696 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Jan 19 12:03:09.917878 systemd-logind[1580]: Removed session 19. Jan 19 12:03:09.899000 audit[5479]: CRED_DISP pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.949440 kernel: audit: type=1106 audit(1768824189.896:829): pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.949518 kernel: audit: type=1104 audit(1768824189.899:830): pid=5479 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:09.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:36938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:03:10.235426 kubelet[2784]: E0119 12:03:10.225645 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:03:14.230273 kubelet[2784]: E0119 12:03:14.229742 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:03:14.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:48322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:14.946752 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:48322.service - OpenSSH per-connection server daemon (10.0.0.1:48322). Jan 19 12:03:14.960530 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:03:14.960721 kernel: audit: type=1130 audit(1768824194.946:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:48322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:15.170000 audit[5512]: USER_ACCT pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.172677 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 48322 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:15.179340 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:15.210349 systemd-logind[1580]: New session 20 of user core. 
Jan 19 12:03:15.174000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.232493 kubelet[2784]: E0119 12:03:15.231935 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:03:15.260621 kernel: audit: type=1101 audit(1768824195.170:833): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.260787 kernel: audit: type=1103 audit(1768824195.174:834): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.260827 kernel: audit: type=1006 audit(1768824195.174:835): pid=5512 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 19 12:03:15.285524 kernel: audit: type=1300 audit(1768824195.174:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2716c60 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:15.174000 audit[5512]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2716c60 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:15.174000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:15.330909 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 19 12:03:15.352871 kernel: audit: type=1327 audit(1768824195.174:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:15.352965 kernel: audit: type=1105 audit(1768824195.351:836): pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.351000 audit[5512]: USER_START pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.358000 audit[5516]: CRED_ACQ pid=5516 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.456488 kernel: audit: type=1103 audit(1768824195.358:837): pid=5516 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.701992 sshd[5516]: Connection closed by 10.0.0.1 port 48322 Jan 19 12:03:15.704003 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:15.708000 audit[5512]: USER_END pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.710000 audit[5512]: CRED_DISP pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.778645 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:48322.service: Deactivated successfully. Jan 19 12:03:15.789799 systemd[1]: session-20.scope: Deactivated successfully. Jan 19 12:03:15.792943 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Jan 19 12:03:15.796512 systemd-logind[1580]: Removed session 20. Jan 19 12:03:15.800132 kernel: audit: type=1106 audit(1768824195.708:838): pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.800297 kernel: audit: type=1104 audit(1768824195.710:839): pid=5512 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:48322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 19 12:03:15.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:48336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:15.804484 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:48336.service - OpenSSH per-connection server daemon (10.0.0.1:48336). Jan 19 12:03:15.950000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.952880 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 48336 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:15.956000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:15.956000 audit[5530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda5b4b6d0 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:15.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:15.959573 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:15.983524 systemd-logind[1580]: New session 21 of user core. Jan 19 12:03:15.991369 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 19 12:03:16.001000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:16.008000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:16.222505 kubelet[2784]: E0119 12:03:16.221950 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:03:17.390800 sshd[5534]: Connection closed by 10.0.0.1 port 48336 Jan 19 12:03:17.391417 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:17.408000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:17.410000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:17.417723 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:48352.service - OpenSSH per-connection server daemon (10.0.0.1:48352). Jan 19 12:03:17.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:17.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:48336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:17.421340 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:48336.service: Deactivated successfully. Jan 19 12:03:17.425685 systemd[1]: session-21.scope: Deactivated successfully. Jan 19 12:03:17.432452 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Jan 19 12:03:17.441743 systemd-logind[1580]: Removed session 21. 
Jan 19 12:03:17.660000 audit[5543]: USER_ACCT pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:17.661893 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 48352 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:17.664000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:17.664000 audit[5543]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9d1d7030 a2=3 a3=0 items=0 ppid=1 pid=5543 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:17.664000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:17.666618 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:17.686619 systemd-logind[1580]: New session 22 of user core. Jan 19 12:03:17.699595 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 19 12:03:17.709000 audit[5543]: USER_START pid=5543 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:17.718000 audit[5550]: CRED_ACQ pid=5550 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.508000 audit[5564]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:19.508000 audit[5564]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff98967710 a2=0 a3=7fff989676fc items=0 ppid=2962 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:19.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:19.516551 sshd[5550]: Connection closed by 10.0.0.1 port 48352 Jan 19 12:03:19.519782 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:19.522000 audit[5564]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:19.522000 audit[5564]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff98967710 a2=0 a3=0 items=0 ppid=2962 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:19.522000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:19.537000 audit[5543]: USER_END pid=5543 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.537000 audit[5543]: CRED_DISP pid=5543 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.552689 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:48352.service: Deactivated successfully. Jan 19 12:03:19.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:19.561499 systemd[1]: session-22.scope: Deactivated successfully. Jan 19 12:03:19.563344 systemd[1]: session-22.scope: Consumed 1.029s CPU time, 43.2M memory peak. Jan 19 12:03:19.568513 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Jan 19 12:03:19.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:19.577345 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:48362.service - OpenSSH per-connection server daemon (10.0.0.1:48362). Jan 19 12:03:19.586894 systemd-logind[1580]: Removed session 22. 
Jan 19 12:03:19.617000 audit[5571]: NETFILTER_CFG table=filter:136 family=2 entries=38 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:19.617000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffca1c581b0 a2=0 a3=7ffca1c5819c items=0 ppid=2962 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:19.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:19.636000 audit[5571]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:19.636000 audit[5571]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca1c581b0 a2=0 a3=0 items=0 ppid=2962 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:19.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:19.741000 audit[5570]: USER_ACCT pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.743741 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 48362 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:19.746000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.746000 audit[5570]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb25f6c30 a2=3 a3=0 items=0 ppid=1 pid=5570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:19.746000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:19.749425 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:19.779480 systemd-logind[1580]: New session 23 of user core. Jan 19 12:03:19.796522 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 19 12:03:19.810000 audit[5570]: USER_START pid=5570 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:19.816000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.231737 kubelet[2784]: E0119 12:03:20.231440 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:03:20.231737 kubelet[2784]: E0119 12:03:20.231590 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:03:20.525857 sshd[5575]: Connection closed by 10.0.0.1 port 48362 Jan 19 12:03:20.531627 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:20.570482 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 19 12:03:20.570612 kernel: audit: type=1106 audit(1768824200.533:869): pid=5570 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.533000 audit[5570]: USER_END pid=5570 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 
12:03:20.533000 audit[5570]: CRED_DISP pid=5570 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.650829 kernel: audit: type=1104 audit(1768824200.533:870): pid=5570 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.662948 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:48362.service: Deactivated successfully. Jan 19 12:03:20.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:20.676511 systemd[1]: session-23.scope: Deactivated successfully. Jan 19 12:03:20.683386 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Jan 19 12:03:20.692920 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:48368.service - OpenSSH per-connection server daemon (10.0.0.1:48368). Jan 19 12:03:20.695449 systemd-logind[1580]: Removed session 23. Jan 19 12:03:20.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:20.742000 kernel: audit: type=1131 audit(1768824200.664:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:48362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:20.742381 kernel: audit: type=1130 audit(1768824200.692:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:20.910705 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 48368 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:20.909000 audit[5587]: USER_ACCT pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.934419 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:20.956864 systemd-logind[1580]: New session 24 of user core. Jan 19 12:03:20.969530 kernel: audit: type=1101 audit(1768824200.909:873): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:20.919000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.034755 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 19 12:03:21.058549 kernel: audit: type=1103 audit(1768824200.919:874): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.058626 kernel: audit: type=1006 audit(1768824200.919:875): pid=5587 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 19 12:03:20.919000 audit[5587]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda42bad20 a2=3 a3=0 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:20.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:21.124879 kernel: audit: type=1300 audit(1768824200.919:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda42bad20 a2=3 a3=0 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:21.124996 kernel: audit: type=1327 audit(1768824200.919:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:21.044000 audit[5587]: USER_START pid=5587 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.177300 kernel: audit: type=1105 audit(1768824201.044:876): pid=5587 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.053000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.227599 kubelet[2784]: E0119 12:03:21.224734 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:03:21.227955 kubelet[2784]: E0119 12:03:21.227922 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:03:21.393574 sshd[5591]: Connection closed by 10.0.0.1 port 48368 Jan 19 12:03:21.394732 sshd-session[5587]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:21.404000 audit[5587]: USER_END pid=5587 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.404000 audit[5587]: CRED_DISP pid=5587 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:21.418499 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Jan 19 12:03:21.419679 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:48368.service: Deactivated successfully. Jan 19 12:03:21.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:21.428290 systemd[1]: session-24.scope: Deactivated successfully. Jan 19 12:03:21.436517 systemd-logind[1580]: Removed session 24. Jan 19 12:03:25.225841 kubelet[2784]: E0119 12:03:25.225416 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:03:26.236544 kubelet[2784]: E0119 12:03:26.234997 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:03:26.431586 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:35938.service - OpenSSH per-connection server daemon (10.0.0.1:35938). Jan 19 12:03:26.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:26.459381 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 19 12:03:26.459434 kernel: audit: type=1130 audit(1768824206.445:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:03:26.635000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.637474 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:26.646308 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:26.673689 kernel: audit: type=1101 audit(1768824206.635:882): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.642000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.681321 systemd-logind[1580]: New session 25 of user core. Jan 19 12:03:26.709375 kernel: audit: type=1103 audit(1768824206.642:883): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.642000 audit[5605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a8d6870 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:26.771617 kernel: audit: type=1006 audit(1768824206.642:884): pid=5605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 19 12:03:26.771772 kernel: audit: type=1300 audit(1768824206.642:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8a8d6870 a2=3 a3=0 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:26.771808 kernel: audit: type=1327 audit(1768824206.642:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:26.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:26.773820 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 19 12:03:26.786000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.793000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.862354 kernel: audit: type=1105 audit(1768824206.786:885): pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:26.862442 kernel: audit: type=1103 audit(1768824206.793:886): pid=5609 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:27.128560 sshd[5609]: Connection closed by 10.0.0.1 port 35938 Jan 19 12:03:27.133648 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:27.135000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:27.148734 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:35938.service: Deactivated successfully. Jan 19 12:03:27.157893 systemd[1]: session-25.scope: Deactivated successfully. Jan 19 12:03:27.165610 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit. Jan 19 12:03:27.169600 systemd-logind[1580]: Removed session 25. Jan 19 12:03:27.135000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:27.207567 kernel: audit: type=1106 audit(1768824207.135:887): pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:27.207659 kernel: audit: type=1104 audit(1768824207.135:888): pid=5605 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:27.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:35938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:03:29.230285 kubelet[2784]: E0119 12:03:29.228846 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:03:29.230285 kubelet[2784]: E0119 12:03:29.229588 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb" Jan 19 12:03:29.671000 audit[5623]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:29.671000 audit[5623]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc340ed3f0 a2=0 a3=7ffc340ed3dc items=0 ppid=2962 pid=5623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:29.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:29.680000 audit[5623]: NETFILTER_CFG table=nat:139 family=2 entries=104 op=nft_register_chain pid=5623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 12:03:29.680000 audit[5623]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc340ed3f0 a2=0 a3=7ffc340ed3dc items=0 ppid=2962 pid=5623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:29.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 12:03:30.230414 kubelet[2784]: E0119 12:03:30.228954 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-vv49f" podUID="bd84ecc0-0a49-4305-b46e-8f992897ba53" Jan 19 12:03:31.230900 kubelet[2784]: E0119 12:03:31.229790 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:03:32.148478 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:35954.service - OpenSSH per-connection server daemon (10.0.0.1:35954). Jan 19 12:03:32.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:35954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 12:03:32.169273 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 19 12:03:32.169336 kernel: audit: type=1130 audit(1768824212.148:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:35954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:32.313000 audit[5627]: USER_ACCT pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.318213 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:32.320618 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:32.341230 systemd-logind[1580]: New session 26 of user core. Jan 19 12:03:32.384647 kernel: audit: type=1101 audit(1768824212.313:893): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.384779 kernel: audit: type=1103 audit(1768824212.318:894): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.318000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.386570 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 19 12:03:32.318000 audit[5627]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4a47e50 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:32.449322 kernel: audit: type=1006 audit(1768824212.318:895): pid=5627 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 19 12:03:32.449471 kernel: audit: type=1300 audit(1768824212.318:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf4a47e50 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:32.449516 kernel: audit: type=1327 audit(1768824212.318:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:32.318000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:32.463875 kernel: audit: type=1105 audit(1768824212.394:896): pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.394000 audit[5627]: USER_START pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.399000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.556263 kernel: audit: type=1103 audit(1768824212.399:897): pid=5631 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.680440 sshd[5631]: Connection closed by 10.0.0.1 port 35954 Jan 19 12:03:32.681196 sshd-session[5627]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:32.740489 kernel: audit: type=1106 audit(1768824212.696:898): pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.696000 audit[5627]: USER_END pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.702796 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:35954.service: Deactivated successfully. 
Jan 19 12:03:32.709305 systemd[1]: session-26.scope: Deactivated successfully. Jan 19 12:03:32.712646 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit. Jan 19 12:03:32.717578 systemd-logind[1580]: Removed session 26. Jan 19 12:03:32.696000 audit[5627]: CRED_DISP pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:32.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:35954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:32.779446 kernel: audit: type=1104 audit(1768824212.696:899): pid=5627 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:33.234655 kubelet[2784]: E0119 12:03:33.232784 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-757f87bb85-qrkvc" podUID="cfea8a68-b054-48f1-88a4-ee8fd7bce007" Jan 19 12:03:34.228721 kubelet[2784]: E0119 12:03:34.226556 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xg6zc" podUID="3f3cc3a4-a155-47d4-9e99-0f5c3bb53331" Jan 19 12:03:34.250734 containerd[1598]: time="2026-01-19T12:03:34.236605504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 12:03:34.337646 containerd[1598]: time="2026-01-19T12:03:34.337589379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:03:34.341700 containerd[1598]: time="2026-01-19T12:03:34.341542091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 12:03:34.341806 containerd[1598]: time="2026-01-19T12:03:34.341723959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 12:03:34.342427 kubelet[2784]: E0119 12:03:34.341979 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:03:34.342845 kubelet[2784]: E0119 12:03:34.342443 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 12:03:34.342845 kubelet[2784]: E0119 12:03:34.342586 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 12:03:34.349638 containerd[1598]: time="2026-01-19T12:03:34.348747660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 12:03:34.453363 containerd[1598]: time="2026-01-19T12:03:34.448734251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:03:34.454899 containerd[1598]: time="2026-01-19T12:03:34.453929085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 12:03:34.454899 containerd[1598]: time="2026-01-19T12:03:34.453971671Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 12:03:34.455646 kubelet[2784]: E0119 12:03:34.454822 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:03:34.455646 kubelet[2784]: E0119 12:03:34.454881 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 12:03:34.460942 kubelet[2784]: E0119 12:03:34.460771 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-862rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xx9qj_calico-system(7402f958-3527-492d-aaa2-32f171fd00ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 12:03:34.463900 kubelet[2784]: E0119 12:03:34.462952 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xx9qj" podUID="7402f958-3527-492d-aaa2-32f171fd00ee" Jan 19 12:03:35.222767 kubelet[2784]: E0119 12:03:35.222540 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 19 12:03:37.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:57196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:37.703331 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:57196.service - OpenSSH per-connection server daemon (10.0.0.1:57196). Jan 19 12:03:37.716375 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 12:03:37.716463 kernel: audit: type=1130 audit(1768824217.702:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:57196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:37.989768 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 57196 ssh2: RSA SHA256:05ebSJX9ltM2zGrTpPgPurZKSQb1AAm8zLDuTaYZJ2A Jan 19 12:03:37.988000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.031997 kernel: audit: type=1101 audit(1768824217.988:902): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.031000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.034712 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 12:03:38.045711 systemd-logind[1580]: New session 27 of user core. 
Jan 19 12:03:38.067259 kernel: audit: type=1103 audit(1768824218.031:903): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.031000 audit[5650]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5cb9110 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:38.142661 kernel: audit: type=1006 audit(1768824218.031:904): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 19 12:03:38.142934 kernel: audit: type=1300 audit(1768824218.031:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5cb9110 a2=3 a3=0 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 12:03:38.031000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:38.143665 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 19 12:03:38.164587 kernel: audit: type=1327 audit(1768824218.031:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 12:03:38.164672 kernel: audit: type=1105 audit(1768824218.152:905): pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.152000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.157000 audit[5654]: CRED_ACQ pid=5654 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.248374 kernel: audit: type=1103 audit(1768824218.157:906): pid=5654 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.481700 sshd[5654]: Connection closed by 10.0.0.1 port 57196 Jan 19 12:03:38.484648 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Jan 19 12:03:38.487000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.501603 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:57196.service: Deactivated successfully. 
Jan 19 12:03:38.506879 systemd[1]: session-27.scope: Deactivated successfully. Jan 19 12:03:38.521524 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit. Jan 19 12:03:38.528830 systemd-logind[1580]: Removed session 27. Jan 19 12:03:38.487000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.573473 kernel: audit: type=1106 audit(1768824218.487:907): pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.573604 kernel: audit: type=1104 audit(1768824218.487:908): pid=5650 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 19 12:03:38.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:57196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 12:03:40.228525 containerd[1598]: time="2026-01-19T12:03:40.228394332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 12:03:40.309889 containerd[1598]: time="2026-01-19T12:03:40.309842670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:03:40.316426 containerd[1598]: time="2026-01-19T12:03:40.316375592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 12:03:40.316685 containerd[1598]: time="2026-01-19T12:03:40.316572304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 12:03:40.316743 kubelet[2784]: E0119 12:03:40.316697 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:03:40.317587 kubelet[2784]: E0119 12:03:40.316738 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 12:03:40.319787 containerd[1598]: time="2026-01-19T12:03:40.317892865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 12:03:40.322558 kubelet[2784]: E0119 12:03:40.320873 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s6vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74c9b656d5-9mc5x_calico-system(b7bafa95-2a0c-41ee-a149-7208583b6960): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 12:03:40.323969 kubelet[2784]: E0119 12:03:40.323426 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74c9b656d5-9mc5x" podUID="b7bafa95-2a0c-41ee-a149-7208583b6960" Jan 19 12:03:40.413352 containerd[1598]: time="2026-01-19T12:03:40.412589170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 12:03:40.415877 containerd[1598]: 
time="2026-01-19T12:03:40.415391435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 12:03:40.416569 containerd[1598]: time="2026-01-19T12:03:40.416378213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 12:03:40.417280 kubelet[2784]: E0119 12:03:40.416932 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:03:40.417280 kubelet[2784]: E0119 12:03:40.416994 2784 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 12:03:40.417817 kubelet[2784]: E0119 12:03:40.417616 2784 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8pgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75b8686f69-mdc5b_calico-apiserver(de448f12-2894-4550-a3a5-5ddf27420cbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 12:03:40.422385 kubelet[2784]: E0119 12:03:40.418863 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75b8686f69-mdc5b" podUID="de448f12-2894-4550-a3a5-5ddf27420cbb"