Jan 23 18:56:50.900406 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026 Jan 23 18:56:50.900507 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:56:50.900555 kernel: BIOS-provided physical RAM map: Jan 23 18:56:50.900565 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:56:50.900571 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:56:50.900577 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:56:50.900584 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:56:50.900590 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:56:50.900597 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:56:50.900603 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:56:50.900609 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 23 18:56:50.900617 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 23 18:56:50.900623 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 23 18:56:50.900630 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 23 18:56:50.900637 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 23 18:56:50.900644 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 23 18:56:50.900653 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 23 18:56:50.900688 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 23 18:56:50.900695 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 23 18:56:50.900727 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 23 18:56:50.900734 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 23 18:56:50.900741 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 23 18:56:50.900918 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:56:50.900927 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:56:50.900933 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:56:50.900940 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 18:56:50.900950 kernel: NX (Execute Disable) protection: active Jan 23 18:56:50.900956 kernel: APIC: Static calls initialized Jan 23 18:56:50.900963 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 23 18:56:50.900970 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 23 18:56:50.900976 kernel: extended physical RAM map: Jan 23 18:56:50.900983 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:56:50.900989 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:56:50.900996 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:56:50.901030 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:56:50.901037 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:56:50.901069 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:56:50.901104 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:56:50.901110 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 23 18:56:50.901143 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 23 18:56:50.901178 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 23 18:56:50.901212 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 23 18:56:50.901244 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 23 18:56:50.901252 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 23 18:56:50.901259 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 23 18:56:50.901266 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 23 18:56:50.901273 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 23 18:56:50.901280 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 23 18:56:50.901287 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 23 18:56:50.901294 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 23 18:56:50.901303 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 23 18:56:50.901310 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 23 18:56:50.901317 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 23 18:56:50.901324 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 23 18:56:50.901331 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:56:50.901338 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:56:50.901345 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:56:50.901352 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 18:56:50.901359 kernel: efi: EFI v2.7 by EDK II Jan 23 18:56:50.901366 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 23 18:56:50.901373 kernel: random: crng init done Jan 23 18:56:50.901382 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 23 18:56:50.901389 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 23 18:56:50.901395 kernel: secureboot: Secure boot disabled Jan 23 18:56:50.901402 kernel: SMBIOS 2.8 present. 
Jan 23 18:56:50.901409 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 23 18:56:50.901416 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:56:50.901423 kernel: Hypervisor detected: KVM Jan 23 18:56:50.901430 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 23 18:56:50.901437 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:56:50.901444 kernel: kvm-clock: using sched offset of 6174006013 cycles Jan 23 18:56:50.901452 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:56:50.901461 kernel: tsc: Detected 2445.426 MHz processor Jan 23 18:56:50.901469 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:56:50.901476 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:56:50.901483 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 23 18:56:50.901491 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:56:50.901498 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:56:50.901505 kernel: Using GB pages for direct mapping Jan 23 18:56:50.901572 kernel: ACPI: Early table checksum verification disabled Jan 23 18:56:50.901580 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 23 18:56:50.901588 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 23 18:56:50.901595 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901603 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901610 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 23 18:56:50.901617 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901627 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901634 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901641 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:56:50.901648 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 18:56:50.901655 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 23 18:56:50.901663 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 23 18:56:50.901670 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 23 18:56:50.901681 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 23 18:56:50.901694 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 23 18:56:50.901707 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 23 18:56:50.901717 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 23 18:56:50.901726 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 23 18:56:50.901736 kernel: No NUMA configuration found Jan 23 18:56:50.901746 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 23 18:56:50.901756 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 23 18:56:50.901770 kernel: Zone ranges: Jan 23 18:56:50.901886 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:56:50.901896 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 23 18:56:50.901903 kernel: Normal empty Jan 23 18:56:50.901910 kernel: Device empty Jan 23 
18:56:50.901917 kernel: Movable zone start for each node Jan 23 18:56:50.901925 kernel: Early memory node ranges Jan 23 18:56:50.901932 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:56:50.901943 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 23 18:56:50.901950 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 23 18:56:50.901957 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 23 18:56:50.901965 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 23 18:56:50.901972 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 23 18:56:50.901979 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 23 18:56:50.901986 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 23 18:56:50.901996 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 23 18:56:50.902003 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:56:50.902017 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:56:50.902027 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 23 18:56:50.902034 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:56:50.902042 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 23 18:56:50.902049 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 23 18:56:50.902057 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 18:56:50.902064 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 23 18:56:50.902072 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 23 18:56:50.902081 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:56:50.902089 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:56:50.902096 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:56:50.902104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:56:50.902113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:56:50.902120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:56:50.902128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:56:50.902135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:56:50.902143 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:56:50.902150 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 18:56:50.902157 kernel: TSC deadline timer available Jan 23 18:56:50.902167 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:56:50.902174 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:56:50.902181 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:56:50.902189 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:56:50.902196 kernel: CPU topo: Num. cores per package: 4 Jan 23 18:56:50.902203 kernel: CPU topo: Num. 
threads per package: 4 Jan 23 18:56:50.902211 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 18:56:50.902220 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:56:50.902227 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:56:50.902234 kernel: kvm-guest: setup PV sched yield Jan 23 18:56:50.902242 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 23 18:56:50.902249 kernel: Booting paravirtualized kernel on KVM Jan 23 18:56:50.902257 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:56:50.902264 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 18:56:50.902272 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 18:56:50.902282 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 18:56:50.902289 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 18:56:50.902298 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:56:50.902311 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:56:50.902325 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:56:50.902335 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:56:50.902346 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 18:56:50.902353 kernel: Fallback order for Node 0: 0 Jan 23 18:56:50.902361 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 23 18:56:50.902368 kernel: Policy zone: DMA32 Jan 23 18:56:50.902375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:56:50.902383 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 18:56:50.902390 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:56:50.902400 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:56:50.902407 kernel: Dynamic Preempt: voluntary Jan 23 18:56:50.902414 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:56:50.902426 kernel: rcu: RCU event tracing is enabled. Jan 23 18:56:50.902434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 18:56:50.902442 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:56:50.902449 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:56:50.902457 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:56:50.902466 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:56:50.902474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 18:56:50.902482 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:56:50.902489 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:56:50.902497 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 18:56:50.902504 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 23 18:56:50.902564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 18:56:50.902575 kernel: Console: colour dummy device 80x25 Jan 23 18:56:50.902582 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:56:50.902589 kernel: ACPI: Core revision 20240827 Jan 23 18:56:50.902597 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 18:56:50.902604 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:56:50.902612 kernel: x2apic enabled Jan 23 18:56:50.902619 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:56:50.902629 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 18:56:50.902641 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 18:56:50.902654 kernel: kvm-guest: setup PV IPIs Jan 23 18:56:50.902666 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 18:56:50.902679 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 18:56:50.902692 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 23 18:56:50.902703 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:56:50.902717 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 18:56:50.902728 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 18:56:50.902739 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:56:50.902750 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 18:56:50.902765 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:56:50.902895 kernel: Speculative Store Bypass: Vulnerable Jan 23 18:56:50.902907 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 18:56:50.902919 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 18:56:50.902927 kernel: active return thunk: srso_alias_return_thunk Jan 23 18:56:50.902934 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 18:56:50.902942 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 18:56:50.902950 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 18:56:50.902957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:56:50.902965 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:56:50.902974 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:56:50.902982 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:56:50.902989 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 18:56:50.902997 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:56:50.903004 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:56:50.903011 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:56:50.903019 kernel: landlock: Up and running. Jan 23 18:56:50.903028 kernel: SELinux: Initializing. 
Jan 23 18:56:50.903036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:56:50.903043 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:56:50.903051 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 18:56:50.903059 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 18:56:50.903066 kernel: signal: max sigframe size: 1776 Jan 23 18:56:50.903074 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:56:50.903084 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:56:50.903091 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:56:50.903098 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:56:50.903106 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:56:50.903113 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:56:50.903120 kernel: .... node #0, CPUs: #1 #2 #3 Jan 23 18:56:50.903128 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 18:56:50.903137 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 23 18:56:50.903145 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120812K reserved, 0K cma-reserved) Jan 23 18:56:50.903152 kernel: devtmpfs: initialized Jan 23 18:56:50.903159 kernel: x86/mm: Memory block size: 128MB Jan 23 18:56:50.903167 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 18:56:50.903175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 18:56:50.903182 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 18:56:50.903192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 23 18:56:50.903199 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 23 18:56:50.903207 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 23 18:56:50.903215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:56:50.903222 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 18:56:50.903230 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:56:50.903237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:56:50.903246 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:56:50.903254 kernel: audit: type=2000 audit(1769194605.952:1): state=initialized audit_enabled=0 res=1 Jan 23 18:56:50.903261 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:56:50.903268 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:56:50.903275 kernel: cpuidle: using governor menu Jan 23 18:56:50.903283 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:56:50.903290 kernel: dca service started, version 1.12.1 Jan 23 18:56:50.903300 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 18:56:50.903307 kernel: PCI: Using configuration type 1 for base access Jan 23 18:56:50.903314 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:56:50.903322 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:56:50.903330 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:56:50.903337 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:56:50.903345 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:56:50.903354 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:56:50.903362 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:56:50.903369 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:56:50.903376 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:56:50.903384 kernel: ACPI: Interpreter enabled Jan 23 18:56:50.903391 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:56:50.903398 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:56:50.903407 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:56:50.903415 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:56:50.903422 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:56:50.903429 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:56:50.903775 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:56:50.904079 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:56:50.904263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:56:50.904274 kernel: PCI host bridge to bus 0000:00 Jan 23 18:56:50.904447 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:56:50.904695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:56:50.904986 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:56:50.905154 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 23 18:56:50.905316 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 23 18:56:50.905472 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 23 18:56:50.905705 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:56:50.906056 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:56:50.906274 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:56:50.906487 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 23 18:56:50.906908 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 23 18:56:50.907162 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 18:56:50.907416 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:56:50.907748 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 18:56:50.908088 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 23 18:56:50.908332 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 23 18:56:50.908638 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 23 18:56:50.908989 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 18:56:50.909219 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 23 18:56:50.909621 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jan 23 18:56:50.909960 kernel: pci 0000:00:03.0: BAR 4 [mem 
0x380000004000-0x380000007fff 64bit pref] Jan 23 18:56:50.910207 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 18:56:50.910435 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 23 18:56:50.910732 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 23 18:56:50.911109 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 23 18:56:50.911339 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 23 18:56:50.911640 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:56:50.912002 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:56:50.912274 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:56:50.912597 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 23 18:56:50.912970 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 23 18:56:50.913239 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:56:50.913505 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 23 18:56:50.913599 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:56:50.913612 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:56:50.913625 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:56:50.913637 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:56:50.913649 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:56:50.913661 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:56:50.913680 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:56:50.913691 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:56:50.913702 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:56:50.913712 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:56:50.913723 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:56:50.913734 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:56:50.913747 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:56:50.913765 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:56:50.913936 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:56:50.913955 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:56:50.913970 kernel: iommu: Default domain type: Translated Jan 23 18:56:50.913982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:56:50.913992 kernel: efivars: Registered efivars operations Jan 23 18:56:50.914003 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:56:50.914019 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:56:50.914029 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 18:56:50.914042 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 18:56:50.914054 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 23 18:56:50.914065 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 23 18:56:50.914075 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 23 18:56:50.914085 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 23 18:56:50.914100 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 23 18:56:50.914111 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 23 
18:56:50.914323 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:56:50.914616 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:56:50.914944 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:56:50.914958 kernel: vgaarb: loaded Jan 23 18:56:50.915003 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 18:56:50.915042 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 18:56:50.915053 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:56:50.915061 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:56:50.915069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:56:50.915077 kernel: pnp: PnP ACPI init Jan 23 18:56:50.915268 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 23 18:56:50.915284 kernel: pnp: PnP ACPI: found 6 devices Jan 23 18:56:50.915292 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:56:50.915300 kernel: NET: Registered PF_INET protocol family Jan 23 18:56:50.915307 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:56:50.915315 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:56:50.915323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:56:50.915330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:56:50.915353 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:56:50.915363 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:56:50.915370 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:56:50.915380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:56:50.915388 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:56:50.915396 kernel: NET: Registered PF_XDP protocol family Jan 23 18:56:50.915670 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:56:50.916425 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 23 18:56:50.916890 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:56:50.917061 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:56:50.917220 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:56:50.917377 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 23 18:56:50.917594 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 18:56:50.917761 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 23 18:56:50.917773 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:56:50.917852 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 18:56:50.917860 kernel: Initialise system trusted keyrings Jan 23 18:56:50.917869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:56:50.917880 kernel: Key type asymmetric registered Jan 23 18:56:50.917888 kernel: Asymmetric key parser 'x509' registered Jan 23 18:56:50.917898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:56:50.917906 kernel: io scheduler mq-deadline registered Jan 23 18:56:50.917913 kernel: io scheduler kyber registered Jan 23 
18:56:50.917921 kernel: io scheduler bfq registered Jan 23 18:56:50.917928 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:56:50.917937 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:56:50.917946 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:56:50.917956 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:56:50.917965 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:56:50.917974 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:56:50.917982 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:56:50.917989 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:56:50.917999 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:56:50.918182 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 23 18:56:50.918194 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 18:56:50.918356 kernel: rtc_cmos 00:04: registered as rtc0 Jan 23 18:56:50.918571 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:56:48 UTC (1769194608) Jan 23 18:56:50.918738 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 23 18:56:50.918753 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 18:56:50.918761 kernel: efifb: probing for efifb Jan 23 18:56:50.918769 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 23 18:56:50.918849 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 18:56:50.918858 kernel: efifb: scrolling: redraw Jan 23 18:56:50.918865 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:56:50.918874 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:56:50.918885 kernel: fb0: EFI VGA frame buffer device Jan 23 18:56:50.918893 kernel: pstore: Using crash dump compression: deflate Jan 23 18:56:50.918900 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:56:50.918908 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:56:50.918916 kernel: Segment Routing with IPv6 Jan 23 18:56:50.918923 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:56:50.918931 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:56:50.918941 kernel: Key type dns_resolver registered Jan 23 18:56:50.918949 kernel: IPI shorthand broadcast: enabled Jan 23 18:56:50.918956 kernel: sched_clock: Marking stable (2728030927, 569735389)->(3519215289, -221448973) Jan 23 18:56:50.918964 kernel: registered taskstats version 1 Jan 23 18:56:50.918972 kernel: Loading compiled-in X.509 certificates Jan 23 18:56:50.918980 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:56:50.918988 kernel: Demotion targets for Node 0: null Jan 23 18:56:50.918997 kernel: Key type .fscrypt registered Jan 23 18:56:50.919005 kernel: Key type fscrypt-provisioning registered Jan 23 18:56:50.919013 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 18:56:50.919021 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:56:50.919029 kernel: ima: No architecture policies found Jan 23 18:56:50.919036 kernel: clk: Disabling unused clocks Jan 23 18:56:50.919044 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:56:50.919054 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:56:50.919093 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:56:50.919101 kernel: Run /init as init process Jan 23 18:56:50.919108 kernel: with arguments: Jan 23 18:56:50.919119 kernel: /init Jan 23 18:56:50.919156 kernel: with environment: Jan 23 18:56:50.919164 kernel: HOME=/ Jan 23 18:56:50.919172 kernel: TERM=linux Jan 23 18:56:50.919182 kernel: SCSI subsystem initialized Jan 23 18:56:50.919190 kernel: libata version 3.00 loaded. Jan 23 18:56:50.919371 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:56:50.919383 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:56:50.919620 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:56:50.919966 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:56:50.920150 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:56:50.920420 kernel: scsi host0: ahci Jan 23 18:56:50.920664 kernel: scsi host1: ahci Jan 23 18:56:50.921106 kernel: scsi host2: ahci Jan 23 18:56:50.921296 kernel: scsi host3: ahci Jan 23 18:56:50.921479 kernel: scsi host4: ahci Jan 23 18:56:50.921754 kernel: scsi host5: ahci Jan 23 18:56:50.921876 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 23 18:56:50.921888 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 23 18:56:50.921897 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 23 18:56:50.921905 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 23 18:56:50.921913 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 23 18:56:50.921926 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 23 18:56:50.921934 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:56:50.921942 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:56:50.921949 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:56:50.921957 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:56:50.921965 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:56:50.921973 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:56:50.921983 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:56:50.921991 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:56:50.921999 kernel: ata3.00: applying bridge limits Jan 23 18:56:50.922007 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:56:50.922015 kernel: ata3.00: configured for UDMA/100 Jan 23 18:56:50.922228 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:56:50.922418 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 18:56:50.922654 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 23 18:56:50.922671 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:56:50.922687 kernel: GPT:16515071 != 27000831 Jan 23 18:56:50.922700 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 23 18:56:50.922711 kernel: GPT:16515071 != 27000831 Jan 23 18:56:50.922722 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:56:50.922738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:56:50.923051 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:56:50.923066 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:56:50.923253 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:56:50.923264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:56:50.923272 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:56:50.923280 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:56:50.923292 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 23 18:56:50.923300 kernel: raid6: avx2x4 gen() 26969 MB/s Jan 23 18:56:50.923308 kernel: raid6: avx2x2 gen() 26093 MB/s Jan 23 18:56:50.923316 kernel: raid6: avx2x1 gen() 18265 MB/s Jan 23 18:56:50.923324 kernel: raid6: using algorithm avx2x4 gen() 26969 MB/s Jan 23 18:56:50.923332 kernel: raid6: .... xor() 4864 MB/s, rmw enabled Jan 23 18:56:50.923339 kernel: raid6: using avx2x2 recovery algorithm Jan 23 18:56:50.923350 kernel: xor: automatically using best checksumming function avx Jan 23 18:56:50.923358 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:56:50.923366 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 23 18:56:50.923376 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2 Jan 23 18:56:50.923384 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:50.923392 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:56:50.923400 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:56:50.923410 kernel: loop: module loaded Jan 23 18:56:50.923418 kernel: loop0: detected capacity change from 0 to 100560 Jan 23 18:56:50.923426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:56:50.923435 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:56:50.923446 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:56:50.923456 systemd[1]: Detected virtualization kvm. Jan 23 18:56:50.923466 systemd[1]: Detected architecture x86-64. Jan 23 18:56:50.923475 systemd[1]: Running in initrd. Jan 23 18:56:50.923483 systemd[1]: No hostname configured, using default hostname. Jan 23 18:56:50.923491 systemd[1]: Hostname set to . Jan 23 18:56:50.923500 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:56:50.923565 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:56:50.923578 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:56:50.923587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:50.923596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 18:56:50.923605 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:56:50.923613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:56:50.923622 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:56:50.923634 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:56:50.923642 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:50.923651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:50.923659 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:56:50.923673 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:56:50.923688 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:56:50.923702 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:56:50.923718 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:56:50.923729 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:56:50.923741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:56:50.923753 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:56:50.923769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:56:50.923878 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:56:50.923891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:50.923901 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:50.923909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:50.923918 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:56:50.923926 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:56:50.923935 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:56:50.923943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:56:50.923954 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:56:50.923963 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:56:50.923971 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:56:50.923979 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:56:50.923988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:56:50.923999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:50.924007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:56:50.924016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:50.924024 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:56:50.924033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:56:50.924071 systemd-journald[320]: Collecting audit messages is enabled. 
Jan 23 18:56:50.924091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:56:50.924100 kernel: Bridge firewalling registered Jan 23 18:56:50.924111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:50.924120 systemd-journald[320]: Journal started Jan 23 18:56:50.924137 systemd-journald[320]: Runtime Journal (/run/log/journal/7afb8aadbd484df3aa607baac8549a44) is 6M, max 48M, 42M free. Jan 23 18:56:50.919180 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 23 18:56:50.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.926879 kernel: audit: type=1130 audit(1769194610.925:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.936919 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:56:50.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.963291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:50.988666 kernel: audit: type=1130 audit(1769194610.952:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.988709 kernel: audit: type=1130 audit(1769194610.968:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.983768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:56:51.013140 kernel: audit: type=1130 audit(1769194610.994:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:50.998709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:56:51.020200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:56:51.041152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:56:51.044126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:56:51.069036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 18:56:51.091694 kernel: audit: type=1130 audit(1769194611.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.076944 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:56:51.091574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:56:51.129108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:51.171230 kernel: audit: type=1130 audit(1769194611.133:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.171276 kernel: audit: type=1130 audit(1769194611.152:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.171401 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:56:51.217203 kernel: audit: type=1130 audit(1769194611.175:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.217236 kernel: audit: type=1334 audit(1769194611.175:10): prog-id=6 op=LOAD Jan 23 18:56:51.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.175000 audit: BPF prog-id=6 op=LOAD Jan 23 18:56:51.153273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:51.171572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:56:51.182047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:56:51.299603 systemd-resolved[374]: Positive Trust Anchors: Jan 23 18:56:51.299663 systemd-resolved[374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:56:51.299669 systemd-resolved[374]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:56:51.299713 systemd-resolved[374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:56:51.329662 systemd-resolved[374]: Defaulting to hostname 'linux'. Jan 23 18:56:51.372350 kernel: audit: type=1130 audit(1769194611.354:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.331275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:56:51.372863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:51.437875 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:56:51.458943 kernel: iscsi: registered transport (tcp) Jan 23 18:56:51.495712 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:56:51.496015 kernel: QLogic iSCSI HBA Driver Jan 23 18:56:51.545106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:56:51.581642 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:51.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.595708 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:56:51.613461 kernel: audit: type=1130 audit(1769194611.592:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.692756 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:56:51.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.707711 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:56:51.724024 kernel: audit: type=1130 audit(1769194611.704:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.729500 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:56:51.803074 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 23 18:56:51.836053 kernel: audit: type=1130 audit(1769194611.802:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.836081 kernel: audit: type=1334 audit(1769194611.804:15): prog-id=7 op=LOAD Jan 23 18:56:51.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.804000 audit: BPF prog-id=7 op=LOAD Jan 23 18:56:51.804000 audit: BPF prog-id=8 op=LOAD Jan 23 18:56:51.806439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:51.872574 systemd-udevd[593]: Using default interface naming scheme 'v257'. Jan 23 18:56:51.893985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:51.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:51.912216 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:56:51.973620 dracut-pre-trigger[645]: rd.md=0: removing MD RAID activation Jan 23 18:56:52.011062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:56:52.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.020000 audit: BPF prog-id=9 op=LOAD Jan 23 18:56:52.022261 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:56:52.070932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:56:52.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.088456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:56:52.130251 systemd-networkd[722]: lo: Link UP Jan 23 18:56:52.130309 systemd-networkd[722]: lo: Gained carrier Jan 23 18:56:52.139130 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:56:52.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.139487 systemd[1]: Reached target network.target - Network. Jan 23 18:56:52.214161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:52.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.233076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:56:52.328998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:56:52.366280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
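Annotation: the "Found device dev-disk-by\x2dlabel-*.device" entries above correspond to udev-created symlinks under /dev/disk/by-label/. A small illustrative sketch, assuming that directory exists on the running system, listing those labels and the block devices they resolve to:

import os

# Illustrative: enumerate the /dev/disk/by-label symlinks behind the
# dev-disk-by\x2dlabel-*.device units systemd reports as "Found device".
BY_LABEL = "/dev/disk/by-label"

def labelled_devices() -> dict:
    devices = {}
    if os.path.isdir(BY_LABEL):
        for name in sorted(os.listdir(BY_LABEL)):
            link = os.path.join(BY_LABEL, name)
            devices[name] = os.path.realpath(link)  # label -> /dev/vdaN on this VM
    return devices

if __name__ == "__main__":
    for label, dev in labelled_devices().items():
        print(f"{label} -> {dev}")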
Jan 23 18:56:52.383194 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:56:52.403118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:56:52.416615 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:56:52.416622 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:56:52.418239 systemd-networkd[722]: eth0: Link UP Jan 23 18:56:52.419436 systemd-networkd[722]: eth0: Gained carrier Jan 23 18:56:52.476249 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 18:56:52.419452 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:56:52.426284 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 18:56:52.458087 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:56:52.471413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:52.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.471709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:52.513269 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:52.544879 kernel: AES CTR mode by8 optimization enabled Jan 23 18:56:52.554441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:52.571106 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:56:52.591963 disk-uuid[825]: Primary Header is updated. Jan 23 18:56:52.591963 disk-uuid[825]: Secondary Entries is updated. Jan 23 18:56:52.591963 disk-uuid[825]: Secondary Header is updated. Jan 23 18:56:52.627420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:52.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.736023 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:56:52.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:52.737048 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:56:52.758396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:52.773308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:56:52.789498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:56:52.851634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:56:52.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:56:53.496304 systemd-networkd[722]: eth0: Gained IPv6LL Jan 23 18:56:53.656695 disk-uuid[834]: Warning: The kernel is still using the old partition table. Jan 23 18:56:53.656695 disk-uuid[834]: The new table will be used at the next reboot or after you Jan 23 18:56:53.656695 disk-uuid[834]: run partprobe(8) or kpartx(8) Jan 23 18:56:53.656695 disk-uuid[834]: The operation has completed successfully. Jan 23 18:56:53.684198 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:56:53.684478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:56:53.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:53.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:53.694746 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:56:53.752999 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Jan 23 18:56:53.753054 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:56:53.753067 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:53.767374 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:56:53.767456 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:56:53.784969 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:56:53.787586 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:56:53.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:53.797489 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
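Annotation: disk-uuid rewrote the GPT above, and its warning notes the kernel keeps using the old partition table until the next reboot or an explicit re-read with partprobe(8) or kpartx(8). A hedged sketch of triggering that re-read by shelling out to partprobe; the default device path is an assumption taken from the vda disk seen elsewhere in this log.

import subprocess

# Sketch only: ask the kernel to re-read the partition table after a GPT
# update, as the disk-uuid warning above suggests (partprobe(8)).
def reread_partitions(device: str = "/dev/vda") -> None:
    subprocess.run(["partprobe", device], check=True)

if __name__ == "__main__":
    reread_partitions()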
Jan 23 18:56:53.945476 ignition[887]: Ignition 2.24.0 Jan 23 18:56:53.945571 ignition[887]: Stage: fetch-offline Jan 23 18:56:53.945620 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:53.945633 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:53.945717 ignition[887]: parsed url from cmdline: "" Jan 23 18:56:53.945722 ignition[887]: no config URL provided Jan 23 18:56:53.945727 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:56:53.945738 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:56:53.945853 ignition[887]: op(1): [started] loading QEMU firmware config module Jan 23 18:56:53.945859 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 18:56:53.969514 ignition[887]: op(1): [finished] loading QEMU firmware config module Jan 23 18:56:54.324175 ignition[887]: parsing config with SHA512: c302d3785c3b94c16d3e6e1848d21af8c7138d7f213b88ae94034dab681b3cf3c07c8d2fdc9a9b7238c4c038e9fb7f0dfe3ebd5186353e4ef01076a9eb204582 Jan 23 18:56:54.336424 unknown[887]: fetched base config from "system" Jan 23 18:56:54.336689 unknown[887]: fetched user config from "qemu" Jan 23 18:56:54.337588 ignition[887]: fetch-offline: fetch-offline passed Jan 23 18:56:54.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:54.341924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:56:54.337675 ignition[887]: Ignition finished successfully Jan 23 18:56:54.345303 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 18:56:54.346441 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:56:54.410439 ignition[896]: Ignition 2.24.0 Jan 23 18:56:54.410489 ignition[896]: Stage: kargs Jan 23 18:56:54.410695 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:54.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:54.416929 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:56:54.410709 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:54.419130 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:56:54.411483 ignition[896]: kargs: kargs passed Jan 23 18:56:54.411573 ignition[896]: Ignition finished successfully Jan 23 18:56:54.464184 ignition[904]: Ignition 2.24.0 Jan 23 18:56:54.464234 ignition[904]: Stage: disks Jan 23 18:56:54.464365 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:54.464375 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:54.482086 ignition[904]: disks: disks passed Jan 23 18:56:54.482252 ignition[904]: Ignition finished successfully Jan 23 18:56:54.491033 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:56:54.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:54.491494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
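Annotation: the fetch-offline stage above checks a fixed set of locations (/usr/lib/ignition/base.d, the platform dir, /usr/lib/ignition/user.ign, then any cmdline URL) and logs the SHA512 of whatever config it ends up parsing. A minimal sketch that reproduces just that digest line for a local file; hashlib is standard Python, but the function and default path are illustrative, not Ignition's code.

import hashlib
from pathlib import Path

# Illustrative: compute the same kind of SHA512 digest that Ignition logs
# as 'parsing config with SHA512: ...' for a local config file.
def config_digest(path: str = "/usr/lib/ignition/user.ign") -> str | None:
    p = Path(path)
    if not p.is_file():
        return None                  # mirrors the "no config at ..." case above
    return hashlib.sha512(p.read_bytes()).hexdigest()

if __name__ == "__main__":
    print(config_digest() or "no config URL provided")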
Jan 23 18:56:54.503953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:56:54.513147 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:56:54.521751 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:56:54.526148 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:56:54.527577 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:56:54.586878 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 23 18:56:54.593411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:56:54.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:54.609623 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:56:54.798881 kernel: EXT4-fs (vda9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:56:54.799463 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:56:54.806690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:56:54.818308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:56:54.847742 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:56:54.857609 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:56:54.891874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (922) Jan 23 18:56:54.891908 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:56:54.891928 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:54.891944 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:56:54.891958 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:56:54.857706 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:56:54.857744 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:56:54.894085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:56:54.902360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:56:54.914130 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:56:55.196624 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:56:55.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:55.200092 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:56:55.225442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:56:55.240629 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:56:55.250438 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:56:55.288059 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 18:56:55.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:55.299727 ignition[1018]: INFO : Ignition 2.24.0 Jan 23 18:56:55.299727 ignition[1018]: INFO : Stage: mount Jan 23 18:56:55.306465 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:55.306465 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:55.306465 ignition[1018]: INFO : mount: mount passed Jan 23 18:56:55.306465 ignition[1018]: INFO : Ignition finished successfully Jan 23 18:56:55.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:55.311346 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:56:55.332358 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:56:55.801762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:56:55.847227 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031) Jan 23 18:56:55.847264 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:56:55.851191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:56:55.862862 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:56:55.862902 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:56:55.866162 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:56:55.923360 ignition[1047]: INFO : Ignition 2.24.0 Jan 23 18:56:55.923360 ignition[1047]: INFO : Stage: files Jan 23 18:56:55.930167 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:55.930167 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:55.944210 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:56:55.950907 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:56:55.950907 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:56:55.963003 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:56:55.969952 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:56:55.975961 unknown[1047]: wrote ssh authorized keys file for user: core Jan 23 18:56:55.980358 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:56:55.988595 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:55.988595 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:56:56.044286 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:56:56.274736 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:56.274736 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:56.298270 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 18:56:56.684381 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:56:57.090124 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:57.090124 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 
23 18:56:57.106499 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:56:57.106499 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:56:57.199386 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 23 18:56:57.199426 kernel: audit: type=1130 audit(1769194617.165:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.199482 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:57.199482 ignition[1047]: INFO : files: files passed Jan 23 18:56:57.199482 ignition[1047]: INFO : Ignition finished successfully Jan 23 18:56:57.325427 kernel: audit: type=1130 audit(1769194617.247:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.325451 kernel: audit: type=1131 audit(1769194617.248:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.325463 kernel: audit: type=1130 audit(1769194617.302:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.161417 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:56:57.168402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
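Annotation: the files stage above ends by writing /sysroot/etc/.ignition-result.json, visible as /etc/.ignition-result.json once the real root is switched to. A small sketch that reads that result file after boot; its schema is not shown in this log, so the sketch simply pretty-prints whatever is there.

import json
from pathlib import Path

# Illustrative: inspect the result file Ignition wrote during the files stage.
RESULT = Path("/etc/.ignition-result.json")

if __name__ == "__main__":
    if RESULT.is_file():
        result = json.loads(RESULT.read_text())
        print(json.dumps(result, indent=2))
    else:
        print("no Ignition result file found")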
Jan 23 18:56:57.200758 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:56:57.241730 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:56:57.359728 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:56:57.242002 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:56:57.371254 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:57.371254 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:57.299200 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:57.391475 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:57.303364 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:56:57.342424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:56:57.462311 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:56:57.462660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:56:57.506476 kernel: audit: type=1130 audit(1769194617.466:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.506513 kernel: audit: type=1131 audit(1769194617.466:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.467735 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:56:57.510929 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:56:57.519733 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:56:57.521406 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:56:57.578147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:56:57.605217 kernel: audit: type=1130 audit(1769194617.577:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.582018 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:56:57.642653 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 23 18:56:57.643117 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:57.649740 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:57.669506 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:56:57.670048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:56:57.698615 kernel: audit: type=1131 audit(1769194617.676:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.670290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:56:57.704291 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:56:57.704633 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:56:57.713949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:56:57.721712 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:56:57.741248 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:56:57.741661 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:56:57.760046 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:56:57.764907 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:56:57.775324 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:56:57.789222 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:56:57.794104 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:56:57.801971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:56:57.824198 kernel: audit: type=1131 audit(1769194617.805:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.802144 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:56:57.824519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:57.829351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:57.837761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:56:57.838329 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:57.880100 kernel: audit: type=1131 audit(1769194617.855:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.849078 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 23 18:56:57.849264 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:56:57.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.880284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:56:57.880530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:56:57.885090 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:56:57.897172 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:56:57.910323 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:57.914730 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:56:57.932905 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:56:57.933159 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:56:57.933307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:56:57.940646 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:56:57.940766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:56:57.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.948023 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:56:57.948096 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:56:57.955693 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:56:57.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.956017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:57.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.964028 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:56:57.964173 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:56:57.979155 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:56:58.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.982255 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:56:58.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:57.982416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:58.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:56:57.995269 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:56:58.005630 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:56:58.005752 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:56:58.020693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:56:58.020879 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:58.024921 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:56:58.025020 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:56:58.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.082237 ignition[1105]: INFO : Ignition 2.24.0 Jan 23 18:56:58.082237 ignition[1105]: INFO : Stage: umount Jan 23 18:56:58.082237 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:58.082237 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:56:58.082237 ignition[1105]: INFO : umount: umount passed Jan 23 18:56:58.082237 ignition[1105]: INFO : Ignition finished successfully Jan 23 18:56:58.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.052119 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:56:58.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.066284 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:56:58.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.081923 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:56:58.082156 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:56:58.086869 systemd[1]: Stopped target network.target - Network. Jan 23 18:56:58.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.093028 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:56:58.093270 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 23 18:56:58.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.096188 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:56:58.096244 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:56:58.113611 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:56:58.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.213000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:56:58.113715 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:56:58.218000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:56:58.125052 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:56:58.125111 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:56:58.135249 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:56:58.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.145897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:56:58.152967 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:56:58.158450 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:56:58.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.158916 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:56:58.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.179932 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:56:58.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.180149 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:56:58.196764 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:56:58.197061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:56:58.215283 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:56:58.218697 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:56:58.218756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:58.227746 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:56:58.227917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:56:58.247407 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:56:58.251100 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:56:58.251178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 23 18:56:58.269233 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:56:58.269303 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:58.273707 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:56:58.273771 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:58.305460 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:58.386959 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:56:58.387231 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:56:58.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.398192 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:56:58.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.398629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:58.403081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:56:58.403156 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:58.414330 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:56:58.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.414391 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:58.439515 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:56:58.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.439687 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:56:58.456653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:56:58.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.456742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:56:58.471237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:56:58.471322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:56:58.512257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:56:58.512454 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:56:58.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.512528 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 23 18:56:58.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.525361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:56:58.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.525436 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:58.535247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:58.535319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:58.578492 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:56:58.578978 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:56:58.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:56:58.585149 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:56:58.592438 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:56:58.631296 systemd[1]: Switching root. Jan 23 18:56:58.679482 systemd-journald[320]: Journal stopped Jan 23 18:57:00.527242 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 23 18:57:00.527309 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:57:00.527328 kernel: SELinux: policy capability open_perms=1 Jan 23 18:57:00.527339 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:57:00.527353 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:57:00.527364 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:57:00.527375 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:57:00.527390 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:57:00.527401 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:57:00.527416 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:57:00.527428 systemd[1]: Successfully loaded SELinux policy in 89.193ms. Jan 23 18:57:00.527458 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.874ms. Jan 23 18:57:00.527470 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:57:00.527482 systemd[1]: Detected virtualization kvm. Jan 23 18:57:00.527494 systemd[1]: Detected architecture x86-64. Jan 23 18:57:00.527510 systemd[1]: Detected first boot. Jan 23 18:57:00.527522 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:57:00.527534 zram_generator::config[1149]: No configuration found. 
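Annotation: after the switch to the real root, systemd detects KVM, treats this as a first boot, and initializes the machine ID from the SMBIOS/DMI UUID. A hedged sketch reading that UUID the way it is exposed under sysfs on x86 guests; the exact derivation systemd applies to it is not reproduced here.

from pathlib import Path

# Illustrative: the DMI product UUID that "Initializing machine ID from
# SMBIOS/DMI UUID" refers to is exposed read-only under sysfs.
DMI_UUID = Path("/sys/class/dmi/id/product_uuid")

def product_uuid() -> str | None:
    try:
        return DMI_UUID.read_text().strip()
    except (FileNotFoundError, PermissionError):  # typically readable by root only
        return None

if __name__ == "__main__":
    print(product_uuid() or "DMI product UUID not available")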
Jan 23 18:57:00.527600 kernel: Guest personality initialized and is inactive Jan 23 18:57:00.527613 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:57:00.527624 kernel: Initialized host personality Jan 23 18:57:00.527637 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:57:00.527648 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:57:00.527660 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:57:00.527676 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:57:00.527690 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:57:00.527705 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:57:00.527717 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:57:00.527729 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:57:00.527740 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:57:00.527753 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:57:00.527767 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:57:00.527851 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:57:00.527866 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:57:00.527877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:57:00.527895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:57:00.527906 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:57:00.527918 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:57:00.527932 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:57:00.527944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:57:00.527956 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:57:00.527967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:57:00.527980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:57:00.527992 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:57:00.528006 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:57:00.528018 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:57:00.528029 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:57:00.528041 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:57:00.528053 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:57:00.528067 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:57:00.528082 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:57:00.528096 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:57:00.528108 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 23 18:57:00.528119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:57:00.528131 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:57:00.528147 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:57:00.528158 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:57:00.528170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:57:00.528182 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:57:00.528194 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:57:00.528207 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:57:00.528218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:57:00.528230 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:57:00.528245 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:57:00.528256 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:57:00.528268 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:57:00.528279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:00.528291 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:57:00.528303 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:57:00.528316 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:57:00.528329 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:57:00.528340 systemd[1]: Reached target machines.target - Containers. Jan 23 18:57:00.528352 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:57:00.528364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:57:00.528375 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:57:00.528387 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:57:00.528401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:57:00.528412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:57:00.528424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:57:00.528437 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:57:00.528449 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:57:00.528462 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:57:00.528475 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:57:00.528488 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:57:00.528501 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:57:00.528512 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 23 18:57:00.528524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:57:00.528538 kernel: ACPI: bus type drm_connector registered Jan 23 18:57:00.528598 kernel: fuse: init (API version 7.41) Jan 23 18:57:00.528610 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:57:00.528623 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:57:00.528634 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:57:00.528666 systemd-journald[1235]: Collecting audit messages is enabled. Jan 23 18:57:00.528691 systemd-journald[1235]: Journal started Jan 23 18:57:00.528711 systemd-journald[1235]: Runtime Journal (/run/log/journal/7afb8aadbd484df3aa607baac8549a44) is 6M, max 48M, 42M free. Jan 23 18:57:00.081000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:57:00.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.471000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:57:00.471000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:57:00.482000 audit: BPF prog-id=15 op=LOAD Jan 23 18:57:00.482000 audit: BPF prog-id=16 op=LOAD Jan 23 18:57:00.482000 audit: BPF prog-id=17 op=LOAD Jan 23 18:57:00.524000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:57:00.524000 audit[1235]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc0beade70 a2=4000 a3=0 items=0 ppid=1 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:00.524000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:56:59.691765 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:56:59.716923 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:56:59.717993 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:56:59.718519 systemd[1]: systemd-journald.service: Consumed 1.391s CPU time. Jan 23 18:57:00.546187 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:57:00.560356 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:57:00.568905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:57:00.583938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:00.596143 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 18:57:00.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.597489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:57:00.604441 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:57:00.610508 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:57:00.615206 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:57:00.620226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:57:00.625237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:57:00.629990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:57:00.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.635729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:57:00.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.642029 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:57:00.642309 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:57:00.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.648094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:57:00.648371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:57:00.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.654059 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:57:00.654325 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:57:00.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.659504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 18:57:00.660029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:57:00.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.665751 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:57:00.666111 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:57:00.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.672145 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:57:00.672389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:57:00.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.677745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:57:00.683673 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:57:00.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.690388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:57:00.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.696671 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:57:00.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.714603 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:57:00.720294 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Jan 23 18:57:00.728160 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:57:00.736484 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:57:00.741466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:57:00.741529 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:57:00.747060 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:57:00.753240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:57:00.753454 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:57:00.756053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:57:00.762951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:57:00.772970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:57:00.774330 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:57:00.777192 systemd-journald[1235]: Time spent on flushing to /var/log/journal/7afb8aadbd484df3aa607baac8549a44 is 14.590ms for 1175 entries. Jan 23 18:57:00.777192 systemd-journald[1235]: System Journal (/var/log/journal/7afb8aadbd484df3aa607baac8549a44) is 8M, max 163.5M, 155.5M free. Jan 23 18:57:00.810140 systemd-journald[1235]: Received client request to flush runtime journal. Jan 23 18:57:00.783324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:57:00.784660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:57:00.799055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:57:00.808142 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:57:00.819044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:57:00.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.838371 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 18:57:00.827509 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:57:00.849766 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:57:00.858036 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:57:00.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.866212 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:57:00.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:57:00.873052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:57:00.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.879648 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:57:00.888041 kernel: loop2: detected capacity change from 0 to 111560 Jan 23 18:57:00.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.893171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:57:00.900331 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:57:00.906000 audit: BPF prog-id=18 op=LOAD Jan 23 18:57:00.906000 audit: BPF prog-id=19 op=LOAD Jan 23 18:57:00.906000 audit: BPF prog-id=20 op=LOAD Jan 23 18:57:00.913637 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:57:00.919000 audit: BPF prog-id=21 op=LOAD Jan 23 18:57:00.921309 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:57:00.931023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:57:00.941000 audit: BPF prog-id=22 op=LOAD Jan 23 18:57:00.942000 audit: BPF prog-id=23 op=LOAD Jan 23 18:57:00.942000 audit: BPF prog-id=24 op=LOAD Jan 23 18:57:00.944126 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:57:00.951765 kernel: loop3: detected capacity change from 0 to 50784 Jan 23 18:57:00.954000 audit: BPF prog-id=25 op=LOAD Jan 23 18:57:00.959000 audit: BPF prog-id=26 op=LOAD Jan 23 18:57:00.959000 audit: BPF prog-id=27 op=LOAD Jan 23 18:57:00.963028 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:57:00.970498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:57:00.971912 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:57:00.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:00.997250 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 23 18:57:00.997266 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Jan 23 18:57:01.010905 kernel: loop4: detected capacity change from 0 to 219144 Jan 23 18:57:01.005128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:57:01.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.022497 systemd-nsresourced[1290]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:57:01.024737 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 23 18:57:01.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.038967 kernel: loop5: detected capacity change from 0 to 111560 Jan 23 18:57:01.041225 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:57:01.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.069888 kernel: loop6: detected capacity change from 0 to 50784 Jan 23 18:57:01.091184 (sd-merge)[1295]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 23 18:57:01.100614 (sd-merge)[1295]: Merged extensions into '/usr'. Jan 23 18:57:01.106969 systemd[1]: Reload requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:57:01.107118 systemd[1]: Reloading... Jan 23 18:57:01.123053 systemd-oomd[1286]: No swap; memory pressure usage will be degraded Jan 23 18:57:01.140252 systemd-resolved[1288]: Positive Trust Anchors: Jan 23 18:57:01.140266 systemd-resolved[1288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:57:01.140271 systemd-resolved[1288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:57:01.140298 systemd-resolved[1288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:57:01.146964 systemd-resolved[1288]: Defaulting to hostname 'linux'. Jan 23 18:57:01.194879 zram_generator::config[1340]: No configuration found. Jan 23 18:57:01.441619 systemd[1]: Reloading finished in 333 ms. Jan 23 18:57:01.472756 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:57:01.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.480665 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:57:01.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.487155 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:57:01.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.494706 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 23 18:57:01.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.511644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:57:01.541676 systemd[1]: Starting ensure-sysext.service... Jan 23 18:57:01.546771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:57:01.552000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:57:01.552000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:57:01.552000 audit: BPF prog-id=28 op=LOAD Jan 23 18:57:01.565000 audit: BPF prog-id=29 op=LOAD Jan 23 18:57:01.567207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:57:01.573000 audit: BPF prog-id=30 op=LOAD Jan 23 18:57:01.573000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:57:01.574000 audit: BPF prog-id=31 op=LOAD Jan 23 18:57:01.574000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:57:01.574000 audit: BPF prog-id=32 op=LOAD Jan 23 18:57:01.574000 audit: BPF prog-id=33 op=LOAD Jan 23 18:57:01.574000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:57:01.574000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:57:01.575000 audit: BPF prog-id=34 op=LOAD Jan 23 18:57:01.575000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:57:01.575000 audit: BPF prog-id=35 op=LOAD Jan 23 18:57:01.575000 audit: BPF prog-id=36 op=LOAD Jan 23 18:57:01.576000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:57:01.576000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:57:01.578000 audit: BPF prog-id=37 op=LOAD Jan 23 18:57:01.578000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:57:01.578000 audit: BPF prog-id=38 op=LOAD Jan 23 18:57:01.578000 audit: BPF prog-id=39 op=LOAD Jan 23 18:57:01.578000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:57:01.578000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:57:01.579000 audit: BPF prog-id=40 op=LOAD Jan 23 18:57:01.579000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:57:01.579000 audit: BPF prog-id=41 op=LOAD Jan 23 18:57:01.579000 audit: BPF prog-id=42 op=LOAD Jan 23 18:57:01.579000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:57:01.579000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:57:01.586995 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:57:01.587075 systemd[1]: Reloading... Jan 23 18:57:01.589498 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:57:01.589527 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:57:01.590001 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:57:01.591370 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 23 18:57:01.591484 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 23 18:57:01.599884 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:57:01.599897 systemd-tmpfiles[1378]: Skipping /boot Jan 23 18:57:01.602649 systemd-udevd[1379]: Using default interface naming scheme 'v257'. Jan 23 18:57:01.613110 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:57:01.613156 systemd-tmpfiles[1378]: Skipping /boot Jan 23 18:57:01.660015 zram_generator::config[1416]: No configuration found. 
Jan 23 18:57:01.778881 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 18:57:01.785870 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:57:01.805945 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:57:01.814020 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 18:57:01.815379 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:57:01.822853 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:57:01.923437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:57:01.929635 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:57:01.929768 systemd[1]: Reloading finished in 342 ms. Jan 23 18:57:01.954539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:57:01.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:01.962000 audit: BPF prog-id=43 op=LOAD Jan 23 18:57:01.962000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:57:01.962000 audit: BPF prog-id=44 op=LOAD Jan 23 18:57:01.962000 audit: BPF prog-id=45 op=LOAD Jan 23 18:57:01.962000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:57:01.962000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:57:01.963000 audit: BPF prog-id=46 op=LOAD Jan 23 18:57:01.963000 audit: BPF prog-id=47 op=LOAD Jan 23 18:57:01.963000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:57:01.963000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:57:01.973000 audit: BPF prog-id=48 op=LOAD Jan 23 18:57:01.973000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:57:01.973000 audit: BPF prog-id=49 op=LOAD Jan 23 18:57:01.973000 audit: BPF prog-id=50 op=LOAD Jan 23 18:57:01.973000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:57:01.973000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:57:01.975000 audit: BPF prog-id=51 op=LOAD Jan 23 18:57:01.975000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:57:01.975000 audit: BPF prog-id=52 op=LOAD Jan 23 18:57:01.975000 audit: BPF prog-id=53 op=LOAD Jan 23 18:57:01.975000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:57:01.975000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:57:01.979000 audit: BPF prog-id=54 op=LOAD Jan 23 18:57:01.979000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:57:01.979000 audit: BPF prog-id=55 op=LOAD Jan 23 18:57:01.979000 audit: BPF prog-id=56 op=LOAD Jan 23 18:57:01.980000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:57:01.980000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:57:01.983000 audit: BPF prog-id=57 op=LOAD Jan 23 18:57:01.983000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:57:01.996980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:57:02.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.130440 systemd[1]: Finished ensure-sysext.service. 
Jan 23 18:57:02.131839 kernel: kvm_amd: TSC scaling supported Jan 23 18:57:02.131883 kernel: kvm_amd: Nested Virtualization enabled Jan 23 18:57:02.131911 kernel: kvm_amd: Nested Paging enabled Jan 23 18:57:02.132921 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 18:57:02.132961 kernel: kvm_amd: PMU virtualization is disabled Jan 23 18:57:02.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.207507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:02.214734 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:57:02.222287 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:57:02.227496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:57:02.232022 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:57:02.241446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:57:02.248078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:57:02.254761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:57:02.259939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:57:02.260105 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:57:02.261870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:57:02.273867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:57:02.279011 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:57:02.282027 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:57:02.306886 kernel: kauditd_printk_skb: 165 callbacks suppressed Jan 23 18:57:02.306952 kernel: audit: type=1334 audit(1769194622.287:210): prog-id=58 op=LOAD Jan 23 18:57:02.306978 kernel: audit: type=1334 audit(1769194622.296:211): prog-id=59 op=LOAD Jan 23 18:57:02.287000 audit: BPF prog-id=58 op=LOAD Jan 23 18:57:02.296000 audit: BPF prog-id=59 op=LOAD Jan 23 18:57:02.296357 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:57:02.299708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:57:02.309954 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:57:02.314079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:57:02.314455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:02.316212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 18:57:02.334920 kernel: audit: type=1130 audit(1769194622.316:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.316716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:57:02.331257 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:57:02.331482 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:57:02.334068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:57:02.334284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:57:02.337023 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:57:02.337262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:57:02.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.349979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:57:02.326000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.350410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:57:02.367486 kernel: audit: type=1131 audit(1769194622.326:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.367540 kernel: audit: type=1127 audit(1769194622.326:214): pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.367618 kernel: audit: type=1130 audit(1769194622.331:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.367949 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:57:02.379636 kernel: audit: type=1131 audit(1769194622.331:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:57:02.379711 kernel: audit: type=1130 audit(1769194622.331:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.379729 kernel: audit: type=1131 audit(1769194622.331:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.379742 kernel: audit: type=1130 audit(1769194622.336:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:02.441000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:57:02.441000 audit[1533]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe2f1dd450 a2=420 a3=0 items=0 ppid=1493 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:02.441000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:57:02.443711 augenrules[1533]: No rules Jan 23 18:57:02.447872 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:57:02.448267 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:57:02.455304 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:57:02.457661 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:57:02.473120 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:57:02.487084 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:57:02.504275 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 23 18:57:02.561991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:57:02.565938 systemd-networkd[1510]: lo: Link UP Jan 23 18:57:02.566195 systemd-networkd[1510]: lo: Gained carrier Jan 23 18:57:02.569179 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:57:02.569197 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:57:02.571331 systemd-networkd[1510]: eth0: Link UP Jan 23 18:57:02.572408 systemd-networkd[1510]: eth0: Gained carrier Jan 23 18:57:02.572461 systemd-networkd[1510]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:57:02.577647 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:57:02.584412 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:57:02.591235 systemd[1]: Reached target network.target - Network. Jan 23 18:57:02.596940 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:57:02.604287 systemd-networkd[1510]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:57:02.604739 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:57:02.606307 systemd-timesyncd[1511]: Network configuration changed, trying to establish connection. Jan 23 18:57:03.341076 systemd-timesyncd[1511]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 18:57:03.341191 systemd-timesyncd[1511]: Initial clock synchronization to Fri 2026-01-23 18:57:03.340948 UTC. Jan 23 18:57:03.341266 systemd-resolved[1288]: Clock change detected. Flushing caches. Jan 23 18:57:03.342351 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:57:03.384137 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:57:03.712708 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:57:03.719968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:57:03.728363 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:57:03.767003 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:57:03.774089 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:57:03.779155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:57:03.785021 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:57:03.790556 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:57:03.796335 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:57:03.801369 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:57:03.807034 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:57:03.812740 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 18:57:03.817888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 18:57:03.823478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:57:03.823544 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:57:03.827466 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:57:03.833398 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:57:03.840204 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:57:03.847568 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:57:03.853072 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:57:03.859241 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:57:03.868117 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:57:03.873550 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:57:03.880576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:57:03.887031 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:57:03.892434 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:57:03.897641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:57:03.897709 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:57:03.899166 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:57:03.905717 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:57:03.912132 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:57:03.919099 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:57:03.924729 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:57:03.929078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:57:03.935357 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:57:03.941098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:57:03.941981 jq[1563]: false Jan 23 18:57:03.946902 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:57:03.954019 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:57:03.956982 extend-filesystems[1564]: Found /dev/vda6 Jan 23 18:57:03.966142 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 23 18:57:03.963261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 18:57:03.959199 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Jan 23 18:57:03.966749 extend-filesystems[1564]: Found /dev/vda9 Jan 23 18:57:03.973994 extend-filesystems[1564]: Checking size of /dev/vda9 Jan 23 18:57:03.978097 oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 23 18:57:03.980948 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Jan 23 18:57:03.980948 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:57:03.980948 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 23 18:57:03.978113 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:57:03.978159 oslogin_cache_refresh[1565]: Refreshing group entry cache Jan 23 18:57:03.983228 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:57:03.987517 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:57:03.988141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:57:03.989251 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:57:03.997101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:57:03.997579 oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 23 18:57:03.998260 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Jan 23 18:57:03.998260 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:57:03.997661 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:57:04.001080 extend-filesystems[1564]: Resized partition /dev/vda9 Jan 23 18:57:04.014785 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 23 18:57:04.008295 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:57:04.015023 extend-filesystems[1589]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:57:04.029091 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:57:04.029427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:57:04.029995 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:57:04.030258 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:57:04.037370 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:57:04.038398 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:57:04.043511 jq[1588]: true Jan 23 18:57:04.045775 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:57:04.047212 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 18:57:04.072900 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 23 18:57:04.072953 jq[1597]: true Jan 23 18:57:04.087962 update_engine[1584]: I20260123 18:57:04.086792 1584 main.cc:92] Flatcar Update Engine starting Jan 23 18:57:04.111467 extend-filesystems[1589]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:57:04.111467 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 18:57:04.111467 extend-filesystems[1589]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 23 18:57:04.106327 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:57:04.135721 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Jan 23 18:57:04.106728 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:57:04.142104 dbus-daemon[1561]: [system] SELinux support is enabled Jan 23 18:57:04.158079 tar[1595]: linux-amd64/LICENSE Jan 23 18:57:04.158079 tar[1595]: linux-amd64/helm Jan 23 18:57:04.158375 update_engine[1584]: I20260123 18:57:04.147215 1584 update_check_scheduler.cc:74] Next update check in 10m11s Jan 23 18:57:04.143983 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:57:04.153153 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:57:04.160079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:57:04.160105 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:57:04.166240 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:57:04.166261 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:57:04.173877 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:57:04.181274 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:57:04.185570 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:57:04.187026 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:57:04.187321 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:57:04.187946 systemd-logind[1577]: New seat seat0. Jan 23 18:57:04.189282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:57:04.227280 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:57:04.244762 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:57:04.269425 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:57:04.280893 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:57:04.306950 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 23 18:57:04.313421 containerd[1599]: time="2026-01-23T18:57:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:57:04.315110 containerd[1599]: time="2026-01-23T18:57:04.315063706Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:57:04.321466 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:57:04.325215 containerd[1599]: time="2026-01-23T18:57:04.325181305Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.734µs" Jan 23 18:57:04.325215 containerd[1599]: time="2026-01-23T18:57:04.325207714Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:57:04.325283 containerd[1599]: time="2026-01-23T18:57:04.325241558Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:57:04.325283 containerd[1599]: time="2026-01-23T18:57:04.325253309Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:57:04.325477 containerd[1599]: time="2026-01-23T18:57:04.325404061Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:57:04.325504 containerd[1599]: time="2026-01-23T18:57:04.325480233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:57:04.325667 containerd[1599]: time="2026-01-23T18:57:04.325564701Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:57:04.325667 containerd[1599]: time="2026-01-23T18:57:04.325653266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326192 containerd[1599]: time="2026-01-23T18:57:04.326057191Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326192 containerd[1599]: time="2026-01-23T18:57:04.326078700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326192 containerd[1599]: time="2026-01-23T18:57:04.326092136Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326192 containerd[1599]: time="2026-01-23T18:57:04.326099720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326380 containerd[1599]: time="2026-01-23T18:57:04.326279726Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326380 containerd[1599]: time="2026-01-23T18:57:04.326345238Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326658 containerd[1599]: time="2026-01-23T18:57:04.326440586Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.326798 containerd[1599]: time="2026-01-23T18:57:04.326733022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.327034 containerd[1599]: time="2026-01-23T18:57:04.326897108Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:57:04.327034 containerd[1599]: time="2026-01-23T18:57:04.326918107Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:57:04.327211 containerd[1599]: time="2026-01-23T18:57:04.326996644Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:57:04.328113 containerd[1599]: time="2026-01-23T18:57:04.327916672Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:57:04.328113 containerd[1599]: time="2026-01-23T18:57:04.327991652Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:57:04.334225 containerd[1599]: time="2026-01-23T18:57:04.334157794Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:57:04.334518 containerd[1599]: time="2026-01-23T18:57:04.334454869Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:57:04.334692 containerd[1599]: time="2026-01-23T18:57:04.334579261Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:57:04.334692 containerd[1599]: time="2026-01-23T18:57:04.334661764Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:57:04.334692 containerd[1599]: time="2026-01-23T18:57:04.334682133Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:57:04.334692 containerd[1599]: time="2026-01-23T18:57:04.334694024Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334704254Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334712369Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334722097Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334732006Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334740862Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334749408Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:57:04.334775 
containerd[1599]: time="2026-01-23T18:57:04.334761661Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:57:04.334775 containerd[1599]: time="2026-01-23T18:57:04.334775397Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:57:04.334987 containerd[1599]: time="2026-01-23T18:57:04.334965121Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:57:04.334987 containerd[1599]: time="2026-01-23T18:57:04.334984868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:57:04.335046 containerd[1599]: time="2026-01-23T18:57:04.335006298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:57:04.335046 containerd[1599]: time="2026-01-23T18:57:04.335016567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:57:04.335046 containerd[1599]: time="2026-01-23T18:57:04.335034230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:57:04.335046 containerd[1599]: time="2026-01-23T18:57:04.335044219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:57:04.335155 containerd[1599]: time="2026-01-23T18:57:04.335054277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:57:04.335155 containerd[1599]: time="2026-01-23T18:57:04.335145969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:57:04.335187 containerd[1599]: time="2026-01-23T18:57:04.335161217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:57:04.335187 containerd[1599]: time="2026-01-23T18:57:04.335171166Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:57:04.335187 containerd[1599]: time="2026-01-23T18:57:04.335179882Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:57:04.335239 containerd[1599]: time="2026-01-23T18:57:04.335199238Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:57:04.335257 containerd[1599]: time="2026-01-23T18:57:04.335238782Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:57:04.335257 containerd[1599]: time="2026-01-23T18:57:04.335249301Z" level=info msg="Start snapshots syncer" Jan 23 18:57:04.335876 containerd[1599]: time="2026-01-23T18:57:04.335494650Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:57:04.335908 containerd[1599]: time="2026-01-23T18:57:04.335781545Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:57:04.336121 containerd[1599]: time="2026-01-23T18:57:04.335919072Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:57:04.336121 containerd[1599]: time="2026-01-23T18:57:04.336012336Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336215175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336237937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336247575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336256241Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336266550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336276579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336285355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336300194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336311685Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336532848Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336549579Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336557283Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336565669Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:57:04.336723 containerd[1599]: time="2026-01-23T18:57:04.336572812Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336641801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336659935Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336676766Z" level=info msg="runtime interface created" Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336681746Z" level=info msg="created NRI interface" Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336689710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336699659Z" level=info msg="Connect containerd service" Jan 23 18:57:04.337252 containerd[1599]: time="2026-01-23T18:57:04.336717953Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:57:04.338509 containerd[1599]: time="2026-01-23T18:57:04.338052996Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:57:04.338982 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:39706.service - OpenSSH per-connection server daemon (10.0.0.1:39706). Jan 23 18:57:04.351104 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:57:04.351645 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:57:04.369912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:57:04.399945 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:57:04.408300 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:57:04.418346 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:57:04.423159 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 23 18:57:04.457636 containerd[1599]: time="2026-01-23T18:57:04.457508572Z" level=info msg="Start subscribing containerd event" Jan 23 18:57:04.457636 containerd[1599]: time="2026-01-23T18:57:04.457628236Z" level=info msg="Start recovering state" Jan 23 18:57:04.457740 containerd[1599]: time="2026-01-23T18:57:04.457718093Z" level=info msg="Start event monitor" Jan 23 18:57:04.457740 containerd[1599]: time="2026-01-23T18:57:04.457729344Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:57:04.457740 containerd[1599]: time="2026-01-23T18:57:04.457738642Z" level=info msg="Start streaming server" Jan 23 18:57:04.457892 containerd[1599]: time="2026-01-23T18:57:04.457747317Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:57:04.457892 containerd[1599]: time="2026-01-23T18:57:04.457754231Z" level=info msg="runtime interface starting up..." Jan 23 18:57:04.457892 containerd[1599]: time="2026-01-23T18:57:04.457759731Z" level=info msg="starting plugins..." Jan 23 18:57:04.457892 containerd[1599]: time="2026-01-23T18:57:04.457773286Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:57:04.459436 containerd[1599]: time="2026-01-23T18:57:04.458893007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:57:04.459436 containerd[1599]: time="2026-01-23T18:57:04.459034220Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:57:04.459223 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:57:04.466016 containerd[1599]: time="2026-01-23T18:57:04.465958913Z" level=info msg="containerd successfully booted in 0.152947s" Jan 23 18:57:04.482760 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:04.485792 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:04.495147 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:57:04.503755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:57:04.520057 systemd-logind[1577]: New session 1 of user core. Jan 23 18:57:04.534303 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:57:04.537192 tar[1595]: linux-amd64/README.md Jan 23 18:57:04.546125 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:57:04.563105 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:57:04.563354 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:04.571922 systemd-logind[1577]: New session 2 of user core. Jan 23 18:57:04.763501 systemd[1685]: Queued start job for default target default.target. Jan 23 18:57:04.782749 systemd[1685]: Created slice app.slice - User Application Slice. Jan 23 18:57:04.782919 systemd[1685]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:57:04.782940 systemd[1685]: Reached target paths.target - Paths. Jan 23 18:57:04.783064 systemd[1685]: Reached target timers.target - Timers. Jan 23 18:57:04.784965 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:57:04.786223 systemd[1685]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:57:04.801140 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 23 18:57:04.801286 systemd[1685]: Reached target sockets.target - Sockets. Jan 23 18:57:04.802479 systemd[1685]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:57:04.802709 systemd[1685]: Reached target basic.target - Basic System. Jan 23 18:57:04.802972 systemd[1685]: Reached target default.target - Main User Target. Jan 23 18:57:04.803012 systemd[1685]: Startup finished in 223ms. Jan 23 18:57:04.803152 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:57:04.818132 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:57:04.844435 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:39710.service - OpenSSH per-connection server daemon (10.0.0.1:39710). Jan 23 18:57:04.925739 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 39710 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:04.928109 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:04.936314 systemd-logind[1577]: New session 3 of user core. Jan 23 18:57:04.945135 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:57:04.970739 sshd[1704]: Connection closed by 10.0.0.1 port 39710 Jan 23 18:57:04.971149 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:04.989048 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:39710.service: Deactivated successfully. Jan 23 18:57:04.991178 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:57:04.992691 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:57:04.996318 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:39716.service - OpenSSH per-connection server daemon (10.0.0.1:39716). Jan 23 18:57:05.004466 systemd-logind[1577]: Removed session 3. Jan 23 18:57:05.089978 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 39716 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:05.092519 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:05.100445 systemd-logind[1577]: New session 4 of user core. Jan 23 18:57:05.110225 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:57:05.137030 sshd[1714]: Connection closed by 10.0.0.1 port 39716 Jan 23 18:57:05.137455 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:05.143123 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:39716.service: Deactivated successfully. Jan 23 18:57:05.146147 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:57:05.147684 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:57:05.149668 systemd-logind[1577]: Removed session 4. Jan 23 18:57:05.359273 systemd-networkd[1510]: eth0: Gained IPv6LL Jan 23 18:57:05.363698 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:57:05.373126 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:57:05.383343 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 18:57:05.392137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:05.403099 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:57:05.445338 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:57:05.454056 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jan 23 18:57:05.454444 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 18:57:05.463141 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:57:06.567173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:06.580088 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:57:06.587091 systemd[1]: Startup finished in 4.226s (kernel) + 8.717s (initrd) + 7.038s (userspace) = 19.982s. Jan 23 18:57:06.600410 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:07.226712 kubelet[1742]: E0123 18:57:07.226538 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:07.231119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:07.231479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:07.232377 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 261.8M memory peak. Jan 23 18:57:15.150313 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:53638.service - OpenSSH per-connection server daemon (10.0.0.1:53638). Jan 23 18:57:15.220893 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 53638 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:15.222788 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.230104 systemd-logind[1577]: New session 5 of user core. Jan 23 18:57:15.240034 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:57:15.259492 sshd[1760]: Connection closed by 10.0.0.1 port 53638 Jan 23 18:57:15.259931 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.269787 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:53638.service: Deactivated successfully. Jan 23 18:57:15.271798 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:57:15.273103 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:57:15.276352 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642). Jan 23 18:57:15.276954 systemd-logind[1577]: Removed session 5. Jan 23 18:57:15.347186 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:15.348756 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.356320 systemd-logind[1577]: New session 6 of user core. Jan 23 18:57:15.366078 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:57:15.379764 sshd[1771]: Connection closed by 10.0.0.1 port 53642 Jan 23 18:57:15.380247 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.397054 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:53642.service: Deactivated successfully. Jan 23 18:57:15.399044 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:57:15.400477 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. 
Jan 23 18:57:15.403353 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:53644.service - OpenSSH per-connection server daemon (10.0.0.1:53644). Jan 23 18:57:15.404791 systemd-logind[1577]: Removed session 6. Jan 23 18:57:15.486106 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 53644 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:15.487998 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.495202 systemd-logind[1577]: New session 7 of user core. Jan 23 18:57:15.504154 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:57:15.522603 sshd[1781]: Connection closed by 10.0.0.1 port 53644 Jan 23 18:57:15.523330 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.534228 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:53644.service: Deactivated successfully. Jan 23 18:57:15.536352 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:57:15.537599 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:57:15.540756 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650). Jan 23 18:57:15.542048 systemd-logind[1577]: Removed session 7. Jan 23 18:57:15.621503 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:15.623452 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.629994 systemd-logind[1577]: New session 8 of user core. Jan 23 18:57:15.640081 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:57:15.669432 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:57:15.670002 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:15.683135 sudo[1793]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:15.685081 sshd[1792]: Connection closed by 10.0.0.1 port 53650 Jan 23 18:57:15.685317 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.704515 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:53650.service: Deactivated successfully. Jan 23 18:57:15.707193 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:57:15.708692 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:57:15.712547 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658). Jan 23 18:57:15.713797 systemd-logind[1577]: Removed session 8. Jan 23 18:57:15.775345 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:15.777250 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.785509 systemd-logind[1577]: New session 9 of user core. Jan 23 18:57:15.795255 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 18:57:15.819054 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:57:15.819588 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:15.825023 sudo[1806]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:15.838267 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:57:15.839017 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:15.849511 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:57:15.903000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:57:15.905998 augenrules[1830]: No rules Jan 23 18:57:15.907557 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:57:15.908141 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:57:15.910492 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 23 18:57:15.910545 kernel: audit: type=1305 audit(1769194635.903:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:57:15.909480 sudo[1805]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:15.903000 audit[1830]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9a1d3e20 a2=420 a3=0 items=0 ppid=1811 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:15.919328 sshd[1804]: Connection closed by 10.0.0.1 port 53658 Jan 23 18:57:15.921047 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.934987 kernel: audit: type=1300 audit(1769194635.903:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9a1d3e20 a2=420 a3=0 items=0 ppid=1811 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:15.903000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:57:15.942312 kernel: audit: type=1327 audit(1769194635.903:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:57:15.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.942954 kernel: audit: type=1130 audit(1769194635.906:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.947916 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:53658.service: Deactivated successfully. Jan 23 18:57:15.950247 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:57:15.951704 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:57:15.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:57:15.953938 kernel: audit: type=1131 audit(1769194635.906:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.954756 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668). Jan 23 18:57:15.955552 systemd-logind[1577]: Removed session 9. Jan 23 18:57:15.907000 audit[1805]: USER_END pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.976552 kernel: audit: type=1106 audit(1769194635.907:225): pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.976593 kernel: audit: type=1104 audit(1769194635.907:226): pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.907000 audit[1805]: CRED_DISP pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.921000 audit[1800]: USER_END pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.007615 kernel: audit: type=1106 audit(1769194635.921:227): pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.007719 kernel: audit: type=1104 audit(1769194635.921:228): pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:15.921000 audit[1800]: CRED_DISP pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.019945 kernel: audit: type=1131 audit(1769194635.946:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.151:22-10.0.0.1:53658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:15.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.151:22-10.0.0.1:53658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:57:16.021242 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:57:16.023069 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:16.029958 systemd-logind[1577]: New session 10 of user core. Jan 23 18:57:16.031328 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:57:15.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:16.019000 audit[1839]: USER_ACCT pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.020000 audit[1839]: CRED_ACQ pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.020000 audit[1839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe2244850 a2=3 a3=0 items=0 ppid=1 pid=1839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.020000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:57:16.032000 audit[1839]: USER_START pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.035000 audit[1843]: CRED_ACQ pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:16.049000 audit[1844]: USER_ACCT pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:16.051990 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:57:16.050000 audit[1844]: CRED_REFR pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:16.052384 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:16.050000 audit[1844]: USER_START pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:16.446897 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 18:57:16.461201 (dockerd)[1865]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:57:16.749795 dockerd[1865]: time="2026-01-23T18:57:16.749550671Z" level=info msg="Starting up" Jan 23 18:57:16.750759 dockerd[1865]: time="2026-01-23T18:57:16.750710778Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:57:16.768586 dockerd[1865]: time="2026-01-23T18:57:16.768517552Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:57:16.833037 dockerd[1865]: time="2026-01-23T18:57:16.832942474Z" level=info msg="Loading containers: start." Jan 23 18:57:16.846938 kernel: Initializing XFRM netlink socket Jan 23 18:57:16.945000 audit[1919]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.945000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff407c66a0 a2=0 a3=0 items=0 ppid=1865 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:57:16.950000 audit[1921]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.950000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff371e1830 a2=0 a3=0 items=0 ppid=1865 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:57:16.955000 audit[1923]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.955000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d3e9ff0 a2=0 a3=0 items=0 ppid=1865 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:57:16.959000 audit[1925]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.959000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc85884040 a2=0 a3=0 items=0 ppid=1865 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:57:16.965000 audit[1927]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.965000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc0171c70 a2=0 a3=0 items=0 ppid=1865 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:57:16.970000 audit[1929]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.970000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe74cfbe60 a2=0 a3=0 items=0 ppid=1865 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:57:16.975000 audit[1931]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.975000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff43e3b940 a2=0 a3=0 items=0 ppid=1865 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:57:16.980000 audit[1933]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:16.980000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc60368160 a2=0 a3=0 items=0 ppid=1865 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:16.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:57:17.026000 audit[1936]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.026000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff5a91bfa0 a2=0 a3=0 items=0 ppid=1865 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 18:57:17.031000 audit[1938]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1938 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.031000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc5d432080 a2=0 a3=0 items=0 ppid=1865 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:57:17.036000 audit[1940]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.036000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe36cfdda0 a2=0 a3=0 items=0 ppid=1865 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:57:17.041000 audit[1942]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.041000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcdd85fe70 a2=0 a3=0 items=0 ppid=1865 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:57:17.046000 audit[1944]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.046000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff333fd800 a2=0 a3=0 items=0 ppid=1865 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:57:17.126000 audit[1974]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.126000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff5676f0d0 a2=0 a3=0 items=0 ppid=1865 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:57:17.130000 audit[1976]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.130000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7fed3b50 a2=0 a3=0 items=0 ppid=1865 pid=1976 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.130000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:57:17.136000 audit[1978]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.136000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfae1ac40 a2=0 a3=0 items=0 ppid=1865 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:57:17.140000 audit[1980]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.140000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe89db9fe0 a2=0 a3=0 items=0 ppid=1865 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:57:17.145000 audit[1982]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.145000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefb2aae30 a2=0 a3=0 items=0 ppid=1865 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:57:17.150000 audit[1984]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.150000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc24b35c60 a2=0 a3=0 items=0 ppid=1865 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:57:17.154000 audit[1986]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.154000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeaa8ab260 a2=0 a3=0 items=0 ppid=1865 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.154000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:57:17.160000 audit[1988]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.160000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff2627b7c0 a2=0 a3=0 items=0 ppid=1865 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:57:17.166000 audit[1990]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.166000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd2442d040 a2=0 a3=0 items=0 ppid=1865 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 18:57:17.170000 audit[1992]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.170000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd657de60 a2=0 a3=0 items=0 ppid=1865 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:57:17.175000 audit[1994]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.175000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff06709090 a2=0 a3=0 items=0 ppid=1865 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:57:17.180000 audit[1996]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.180000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff405d52c0 a2=0 a3=0 items=0 ppid=1865 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.180000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:57:17.185000 audit[1998]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.185000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe7cf716b0 a2=0 a3=0 items=0 ppid=1865 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:57:17.197000 audit[2003]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.197000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcae7ab3c0 a2=0 a3=0 items=0 ppid=1865 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:57:17.202000 audit[2005]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.202000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff3e3774f0 a2=0 a3=0 items=0 ppid=1865 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.202000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:57:17.207000 audit[2007]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.207000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc1aeb13b0 a2=0 a3=0 items=0 ppid=1865 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:57:17.213000 audit[2009]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.213000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefc0a1120 a2=0 a3=0 items=0 ppid=1865 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:57:17.218000 audit[2011]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.218000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdc9dcebc0 a2=0 a3=0 items=0 ppid=1865 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:57:17.223000 audit[2013]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:17.223000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffedb433940 a2=0 a3=0 items=0 ppid=1865 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:57:17.252000 audit[2018]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.252000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff23289280 a2=0 a3=0 items=0 ppid=1865 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 18:57:17.255454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:57:17.258076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:57:17.259000 audit[2021]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.259000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffffdec4f90 a2=0 a3=0 items=0 ppid=1865 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 18:57:17.281000 audit[2031]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.281000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd9ed1dcf0 a2=0 a3=0 items=0 ppid=1865 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 18:57:17.301000 audit[2037]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.301000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff0b531fb0 a2=0 a3=0 items=0 ppid=1865 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 18:57:17.308000 audit[2039]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.308000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc46949130 a2=0 a3=0 items=0 ppid=1865 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 18:57:17.313000 audit[2041]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.313000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffa1bbc750 a2=0 a3=0 items=0 ppid=1865 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.313000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 18:57:17.319000 audit[2043]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.319000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff02f2a9f0 a2=0 a3=0 items=0 ppid=1865 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:57:17.324000 audit[2045]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:17.324000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd5a6b1110 a2=0 a3=0 items=0 ppid=1865 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:17.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 18:57:17.327440 systemd-networkd[1510]: docker0: Link UP Jan 23 18:57:17.382498 dockerd[1865]: time="2026-01-23T18:57:17.382361753Z" level=info msg="Loading containers: done." Jan 23 18:57:17.406143 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2227150489-merged.mount: Deactivated successfully. Jan 23 18:57:17.463535 dockerd[1865]: time="2026-01-23T18:57:17.463361264Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:57:17.463535 dockerd[1865]: time="2026-01-23T18:57:17.463480376Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:57:17.463783 dockerd[1865]: time="2026-01-23T18:57:17.463568240Z" level=info msg="Initializing buildkit" Jan 23 18:57:17.498794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:17.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:17.514190 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:17.523397 dockerd[1865]: time="2026-01-23T18:57:17.523220523Z" level=info msg="Completed buildkit initialization" Jan 23 18:57:17.536784 dockerd[1865]: time="2026-01-23T18:57:17.536527481Z" level=info msg="Daemon has completed initialization" Jan 23 18:57:17.536784 dockerd[1865]: time="2026-01-23T18:57:17.536578596Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:57:17.537528 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 23 18:57:17.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:17.597783 kubelet[2068]: E0123 18:57:17.597624 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:17.603631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:17.604005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:17.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:57:17.604559 systemd[1]: kubelet.service: Consumed 270ms CPU time, 112.1M memory peak. Jan 23 18:57:18.369453 containerd[1599]: time="2026-01-23T18:57:18.368792519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 18:57:19.104904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235343495.mount: Deactivated successfully. Jan 23 18:57:20.164212 containerd[1599]: time="2026-01-23T18:57:20.164120496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:20.165615 containerd[1599]: time="2026-01-23T18:57:20.165528305Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25399329" Jan 23 18:57:20.167397 containerd[1599]: time="2026-01-23T18:57:20.167331621Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:20.170465 containerd[1599]: time="2026-01-23T18:57:20.170365996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:20.171373 containerd[1599]: time="2026-01-23T18:57:20.171231519Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.802294751s" Jan 23 18:57:20.171373 containerd[1599]: time="2026-01-23T18:57:20.171360200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 18:57:20.172156 containerd[1599]: time="2026-01-23T18:57:20.172073569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 18:57:21.428495 containerd[1599]: time="2026-01-23T18:57:21.428327485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:21.430009 containerd[1599]: time="2026-01-23T18:57:21.429936650Z" level=info msg="stop pulling 
image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 23 18:57:21.431897 containerd[1599]: time="2026-01-23T18:57:21.431875119Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:21.435363 containerd[1599]: time="2026-01-23T18:57:21.435303816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:21.437290 containerd[1599]: time="2026-01-23T18:57:21.437222623Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.265082349s" Jan 23 18:57:21.437290 containerd[1599]: time="2026-01-23T18:57:21.437249462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 18:57:21.438115 containerd[1599]: time="2026-01-23T18:57:21.437887169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 18:57:22.411491 containerd[1599]: time="2026-01-23T18:57:22.411388277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:22.412754 containerd[1599]: time="2026-01-23T18:57:22.412623997Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 23 18:57:22.413964 containerd[1599]: time="2026-01-23T18:57:22.413901760Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:22.417878 containerd[1599]: time="2026-01-23T18:57:22.417104292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:22.420102 containerd[1599]: time="2026-01-23T18:57:22.420006951Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 982.09188ms" Jan 23 18:57:22.420102 containerd[1599]: time="2026-01-23T18:57:22.420074496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 18:57:22.420919 containerd[1599]: time="2026-01-23T18:57:22.420886739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 18:57:23.432055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622258598.mount: Deactivated successfully. 
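[Editor's note, not part of the journal] Each containerd "Pulled image ... in <duration>" message above already reports how long the pull took; the following is a rough Python sketch for tabulating those durations when skimming long sections like this one. The regex and names are mine, and it assumes the lines are pasted verbatim, with containerd's escaped quotes (\") intact:

    import re

    # containerd's Pulled-image messages as they appear in this journal.
    PULLED = re.compile(r'Pulled image \\"([^\\"]+)\\".*? in ([0-9.]+)(ms|s)')

    def pull_times(lines):
        for line in lines:
            m = PULLED.search(line)
            if m:
                image, value, unit = m.groups()
                yield image, float(value) / (1000 if unit == "ms" else 1)

Run over this section it would report, for example, registry.k8s.io/kube-apiserver:v1.34.3 at roughly 1.80 s and registry.k8s.io/kube-scheduler:v1.34.3 at roughly 0.98 s, matching the durations printed above.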
Jan 23 18:57:23.754153 containerd[1599]: time="2026-01-23T18:57:23.753945141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:23.755446 containerd[1599]: time="2026-01-23T18:57:23.755370549Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 23 18:57:23.756520 containerd[1599]: time="2026-01-23T18:57:23.756452039Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:23.758736 containerd[1599]: time="2026-01-23T18:57:23.758602894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:23.759145 containerd[1599]: time="2026-01-23T18:57:23.759077491Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.338038307s" Jan 23 18:57:23.759145 containerd[1599]: time="2026-01-23T18:57:23.759133855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 18:57:23.760302 containerd[1599]: time="2026-01-23T18:57:23.760084976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 18:57:24.346095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307036747.mount: Deactivated successfully. 
Jan 23 18:57:25.386258 containerd[1599]: time="2026-01-23T18:57:25.386187899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.387865 containerd[1599]: time="2026-01-23T18:57:25.387739886Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=326" Jan 23 18:57:25.389557 containerd[1599]: time="2026-01-23T18:57:25.389484363Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.392900 containerd[1599]: time="2026-01-23T18:57:25.392727117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.394276 containerd[1599]: time="2026-01-23T18:57:25.394166124Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.63400199s" Jan 23 18:57:25.394276 containerd[1599]: time="2026-01-23T18:57:25.394252395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 18:57:25.395063 containerd[1599]: time="2026-01-23T18:57:25.394983850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 18:57:25.778265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790109359.mount: Deactivated successfully. 
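[Editor's note, not part of the journal] The \x2d sequences in the transient mount unit names above are systemd unit-name escaping: inside a path component a literal '-' is written as \x2d, because a bare '-' stands for the path separator. A small sketch to undo the escaping when mapping unit names back to paths; unescape_unit is a name of my choosing:

    import re

    def unescape_unit(name: str) -> str:
        # \xNN -> the character it encodes, e.g. \x2d -> '-'
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3307036747.mount"))
    # var-lib-containerd-tmpmounts-containerd-mount3307036747.mount

Reading the remaining '-' as '/', each of these units is a temporary mount containerd set up under /var/lib/containerd/tmpmounts/ during the pull and released afterwards, hence "Deactivated successfully".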
Jan 23 18:57:25.786415 containerd[1599]: time="2026-01-23T18:57:25.785952111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.787311 containerd[1599]: time="2026-01-23T18:57:25.787214729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 23 18:57:25.789083 containerd[1599]: time="2026-01-23T18:57:25.789061416Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.792961 containerd[1599]: time="2026-01-23T18:57:25.792935619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:25.794112 containerd[1599]: time="2026-01-23T18:57:25.794045449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 399.034067ms" Jan 23 18:57:25.794112 containerd[1599]: time="2026-01-23T18:57:25.794084662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 18:57:25.794505 containerd[1599]: time="2026-01-23T18:57:25.794483088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 18:57:26.220546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553131557.mount: Deactivated successfully. Jan 23 18:57:27.854605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:57:27.856602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:28.068747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:28.071883 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 23 18:57:28.071963 kernel: audit: type=1130 audit(1769194648.067:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:28.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:28.098252 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:28.166143 kubelet[2287]: E0123 18:57:28.165939 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:28.169499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:28.169762 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
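[Editor's note, not part of the journal] The kubelet exits above (status=1, restart counter at 1 and then 2) are all the same failure: /var/lib/kubelet/config.yaml does not exist yet, which typically means kubeadm init or join has not yet run and written the kubelet configuration. systemd simply keeps restarting the unit until the file appears. A trivial counting sketch; the needle string is taken from the error text above, and journal_lines is an assumed iterable of journal lines:

    def missing_config_exits(journal_lines) -> int:
        # Count kubelet startup failures caused by the missing kubeadm-generated config.
        needle = "failed to load Kubelet config file /var/lib/kubelet/config.yaml"
        return sum(1 for line in journal_lines if needle in line)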
Jan 23 18:57:28.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:57:28.170355 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.2M memory peak. Jan 23 18:57:28.181000 kernel: audit: type=1131 audit(1769194648.168:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:57:29.906084 containerd[1599]: time="2026-01-23T18:57:29.905914684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:29.907105 containerd[1599]: time="2026-01-23T18:57:29.907066609Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 23 18:57:29.908345 containerd[1599]: time="2026-01-23T18:57:29.908276188Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:29.911473 containerd[1599]: time="2026-01-23T18:57:29.911409638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:29.912511 containerd[1599]: time="2026-01-23T18:57:29.912425474Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.117862837s" Jan 23 18:57:29.912511 containerd[1599]: time="2026-01-23T18:57:29.912486859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 18:57:32.972901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:32.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:32.973161 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.2M memory peak. Jan 23 18:57:32.976015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:32.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:32.993267 kernel: audit: type=1130 audit(1769194652.971:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:32.993318 kernel: audit: type=1131 audit(1769194652.971:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:33.014299 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-10.scope)... 
Jan 23 18:57:33.014363 systemd[1]: Reloading... Jan 23 18:57:33.131923 zram_generator::config[2376]: No configuration found. Jan 23 18:57:33.352171 systemd[1]: Reloading finished in 337 ms. Jan 23 18:57:33.379000 audit: BPF prog-id=63 op=LOAD Jan 23 18:57:33.379000 audit: BPF prog-id=58 op=UNLOAD Jan 23 18:57:33.387993 kernel: audit: type=1334 audit(1769194653.379:286): prog-id=63 op=LOAD Jan 23 18:57:33.388055 kernel: audit: type=1334 audit(1769194653.379:287): prog-id=58 op=UNLOAD Jan 23 18:57:33.388083 kernel: audit: type=1334 audit(1769194653.379:288): prog-id=64 op=LOAD Jan 23 18:57:33.379000 audit: BPF prog-id=64 op=LOAD Jan 23 18:57:33.391373 kernel: audit: type=1334 audit(1769194653.379:289): prog-id=48 op=UNLOAD Jan 23 18:57:33.379000 audit: BPF prog-id=48 op=UNLOAD Jan 23 18:57:33.394926 kernel: audit: type=1334 audit(1769194653.379:290): prog-id=65 op=LOAD Jan 23 18:57:33.379000 audit: BPF prog-id=65 op=LOAD Jan 23 18:57:33.379000 audit: BPF prog-id=66 op=LOAD Jan 23 18:57:33.401511 kernel: audit: type=1334 audit(1769194653.379:291): prog-id=66 op=LOAD Jan 23 18:57:33.401552 kernel: audit: type=1334 audit(1769194653.379:292): prog-id=49 op=UNLOAD Jan 23 18:57:33.379000 audit: BPF prog-id=49 op=UNLOAD Jan 23 18:57:33.405760 kernel: audit: type=1334 audit(1769194653.379:293): prog-id=50 op=UNLOAD Jan 23 18:57:33.379000 audit: BPF prog-id=50 op=UNLOAD Jan 23 18:57:33.409202 kernel: audit: type=1334 audit(1769194653.379:294): prog-id=67 op=LOAD Jan 23 18:57:33.379000 audit: BPF prog-id=67 op=LOAD Jan 23 18:57:33.412338 kernel: audit: type=1334 audit(1769194653.379:295): prog-id=57 op=UNLOAD Jan 23 18:57:33.379000 audit: BPF prog-id=57 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=68 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=60 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=69 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=70 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=61 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=62 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=71 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=54 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=72 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=73 op=LOAD Jan 23 18:57:33.383000 audit: BPF prog-id=55 op=UNLOAD Jan 23 18:57:33.383000 audit: BPF prog-id=56 op=UNLOAD Jan 23 18:57:33.386000 audit: BPF prog-id=74 op=LOAD Jan 23 18:57:33.386000 audit: BPF prog-id=75 op=LOAD Jan 23 18:57:33.386000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:57:33.386000 audit: BPF prog-id=47 op=UNLOAD Jan 23 18:57:33.386000 audit: BPF prog-id=76 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=59 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=77 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=78 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=79 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=80 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=51 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=81 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=82 op=LOAD Jan 23 18:57:33.395000 audit: BPF prog-id=52 op=UNLOAD Jan 23 18:57:33.395000 audit: BPF prog-id=53 op=UNLOAD Jan 23 18:57:33.418284 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:57:33.418426 systemd[1]: kubelet.service: Failed with result 'signal'. 
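[Editor's note, not part of the journal] The long run of "audit: BPF prog-id=NN op=LOAD / op=UNLOAD" records appears to be nothing more than the systemd reload above re-attaching its per-unit cgroup BPF programs: each replaced program shows up as a LOAD of the new id and an UNLOAD of the old one. A quick tally sketch; journal_text is assumed to hold the pasted log:

    import re
    from collections import Counter

    def bpf_ops(journal_text: str) -> Counter:
        # e.g. Counter({'LOAD': ..., 'UNLOAD': ...}) over the reload window
        return Counter(re.findall(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)", journal_text))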
Jan 23 18:57:33.418968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:33.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:57:33.419136 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.5M memory peak. Jan 23 18:57:33.421948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:33.610653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:33.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:33.622197 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:57:33.682108 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:57:33.682108 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:33.682369 kubelet[2425]: I0123 18:57:33.682112 2425 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:57:34.422268 kubelet[2425]: I0123 18:57:34.422199 2425 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:57:34.422268 kubelet[2425]: I0123 18:57:34.422249 2425 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:57:34.424994 kubelet[2425]: I0123 18:57:34.424938 2425 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:57:34.424994 kubelet[2425]: I0123 18:57:34.424982 2425 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:57:34.425268 kubelet[2425]: I0123 18:57:34.425205 2425 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:57:34.493942 kubelet[2425]: E0123 18:57:34.493629 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:57:34.494406 kubelet[2425]: I0123 18:57:34.494254 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:57:34.499779 kubelet[2425]: I0123 18:57:34.499670 2425 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:57:34.508123 kubelet[2425]: I0123 18:57:34.508064 2425 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:57:34.509511 kubelet[2425]: I0123 18:57:34.509445 2425 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:57:34.509660 kubelet[2425]: I0123 18:57:34.509503 2425 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:57:34.509660 kubelet[2425]: I0123 18:57:34.509656 2425 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:57:34.509919 kubelet[2425]: I0123 18:57:34.509664 2425 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:57:34.509919 kubelet[2425]: I0123 18:57:34.509784 2425 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:57:34.513253 kubelet[2425]: I0123 18:57:34.513170 2425 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:34.513747 kubelet[2425]: I0123 18:57:34.513612 2425 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:57:34.513747 kubelet[2425]: I0123 18:57:34.513654 2425 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:57:34.513747 kubelet[2425]: I0123 18:57:34.513672 2425 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:57:34.513747 kubelet[2425]: I0123 18:57:34.513739 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:57:34.514529 kubelet[2425]: E0123 18:57:34.514508 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:57:34.514578 kubelet[2425]: E0123 18:57:34.514558 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:57:34.518517 kubelet[2425]: I0123 18:57:34.517898 2425 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:57:34.518517 kubelet[2425]: I0123 18:57:34.518387 2425 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:57:34.518517 kubelet[2425]: I0123 18:57:34.518409 2425 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:57:34.518517 kubelet[2425]: W0123 18:57:34.518446 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:57:34.522411 kubelet[2425]: I0123 18:57:34.522353 2425 server.go:1262] "Started kubelet" Jan 23 18:57:34.522540 kubelet[2425]: I0123 18:57:34.522454 2425 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:57:34.522581 kubelet[2425]: I0123 18:57:34.522540 2425 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:57:34.523075 kubelet[2425]: I0123 18:57:34.522928 2425 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:57:34.523075 kubelet[2425]: I0123 18:57:34.523011 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:57:34.523471 kubelet[2425]: I0123 18:57:34.523169 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:57:34.527467 kubelet[2425]: I0123 18:57:34.526965 2425 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:57:34.527467 kubelet[2425]: E0123 18:57:34.525883 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d712a295245d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:57:34.522295762 +0000 UTC m=+0.894765252,LastTimestamp:2026-01-23 18:57:34.522295762 +0000 UTC m=+0.894765252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:57:34.527467 kubelet[2425]: E0123 18:57:34.527456 2425 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:57:34.527646 kubelet[2425]: E0123 18:57:34.527498 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:34.527646 kubelet[2425]: I0123 18:57:34.527522 2425 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:57:34.527646 kubelet[2425]: I0123 18:57:34.527629 2425 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:57:34.527764 kubelet[2425]: I0123 18:57:34.527676 2425 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:57:34.528471 kubelet[2425]: I0123 18:57:34.528457 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:57:34.528630 kubelet[2425]: E0123 18:57:34.528557 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:57:34.528969 kubelet[2425]: I0123 18:57:34.528898 2425 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:57:34.529129 kubelet[2425]: I0123 18:57:34.529065 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:57:34.530981 kubelet[2425]: E0123 18:57:34.530482 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Jan 23 18:57:34.531451 kubelet[2425]: I0123 18:57:34.531272 2425 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:57:34.532000 audit[2442]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.532000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff4a16d8a0 a2=0 a3=0 items=0 ppid=2425 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:57:34.536000 audit[2445]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.536000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb49d6f10 a2=0 a3=0 items=0 ppid=2425 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.536000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:57:34.541000 audit[2448]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 23 18:57:34.541000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2d1dce10 a2=0 a3=0 items=0 ppid=2425 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:57:34.546000 audit[2452]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.546000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcfcc95cd0 a2=0 a3=0 items=0 ppid=2425 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:57:34.549903 kubelet[2425]: I0123 18:57:34.549685 2425 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:57:34.549946 kubelet[2425]: I0123 18:57:34.549904 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:57:34.549946 kubelet[2425]: I0123 18:57:34.549923 2425 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:34.553305 kubelet[2425]: I0123 18:57:34.553181 2425 policy_none.go:49] "None policy: Start" Jan 23 18:57:34.553305 kubelet[2425]: I0123 18:57:34.553229 2425 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:57:34.553305 kubelet[2425]: I0123 18:57:34.553241 2425 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:57:34.555222 kubelet[2425]: I0123 18:57:34.555182 2425 policy_none.go:47] "Start" Jan 23 18:57:34.558000 audit[2455]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.558000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff35b00650 a2=0 a3=0 items=0 ppid=2425 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.558000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 23 18:57:34.560925 kubelet[2425]: I0123 18:57:34.560449 2425 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 23 18:57:34.560000 audit[2457]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:34.560000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3ba11d70 a2=0 a3=0 items=0 ppid=2425 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:57:34.563190 kubelet[2425]: I0123 18:57:34.563079 2425 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 18:57:34.563190 kubelet[2425]: I0123 18:57:34.563098 2425 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:57:34.563190 kubelet[2425]: I0123 18:57:34.563113 2425 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:57:34.563190 kubelet[2425]: E0123 18:57:34.563147 2425 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:57:34.561000 audit[2458]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.561000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcab301d50 a2=0 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:57:34.565246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 18:57:34.563000 audit[2459]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:34.563000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7315fc80 a2=0 a3=0 items=0 ppid=2425 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:57:34.566111 kubelet[2425]: E0123 18:57:34.565975 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:57:34.565000 audit[2460]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.565000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf2dbbef0 a2=0 a3=0 items=0 ppid=2425 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:57:34.566000 audit[2461]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:34.566000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc19a31780 a2=0 a3=0 items=0 ppid=2425 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:57:34.568000 audit[2463]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:34.568000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd47cbc820 a2=0 a3=0 items=0 ppid=2425 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:57:34.568000 audit[2464]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:34.568000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef3db0050 a2=0 a3=0 items=0 ppid=2425 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:34.568000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:57:34.577562 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:57:34.582092 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:57:34.605122 kubelet[2425]: E0123 18:57:34.605061 2425 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:57:34.605346 kubelet[2425]: I0123 18:57:34.605249 2425 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:57:34.605346 kubelet[2425]: I0123 18:57:34.605262 2425 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:57:34.605591 kubelet[2425]: I0123 18:57:34.605517 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:57:34.607100 kubelet[2425]: E0123 18:57:34.607039 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:57:34.607100 kubelet[2425]: E0123 18:57:34.607074 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:57:34.678264 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 23 18:57:34.693941 kubelet[2425]: E0123 18:57:34.693799 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:34.696673 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 23 18:57:34.699232 kubelet[2425]: E0123 18:57:34.699217 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:34.702273 systemd[1]: Created slice kubepods-burstable-pod16c1a764fb2c2c13872406f100fb26e2.slice - libcontainer container kubepods-burstable-pod16c1a764fb2c2c13872406f100fb26e2.slice. 
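[Editor's note, not part of the journal] With the systemd cgroup driver reported earlier (cgroupDriver="systemd"), the kubelet nests Burstable and BestEffort pods as kubepods.slice -> kubepods-<qos>.slice -> kubepods-<qos>-pod<uid>.slice, which is exactly what the "Created slice" lines above show for the three static control-plane pods. A sketch that reproduces the names; pod_slice is my helper, and any '-' in a pod UID would be replaced by '_' (the UIDs above contain none):

    def pod_slice(qos: str, pod_uid: str) -> str:
        # systemd-driver naming: kubepods-<qos>-pod<uid>.slice, with '-' in the UID mapped to '_'
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "5bbfee13ce9e07281eca876a0b8067f2"))
    # kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice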
Jan 23 18:57:34.704564 kubelet[2425]: E0123 18:57:34.704383 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:34.706671 kubelet[2425]: I0123 18:57:34.706490 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:57:34.706765 kubelet[2425]: E0123 18:57:34.706689 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 23 18:57:34.731506 kubelet[2425]: E0123 18:57:34.731284 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Jan 23 18:57:34.829278 kubelet[2425]: I0123 18:57:34.829117 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:57:34.829278 kubelet[2425]: I0123 18:57:34.829180 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:34.829278 kubelet[2425]: I0123 18:57:34.829224 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:34.829278 kubelet[2425]: I0123 18:57:34.829275 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:34.829480 kubelet[2425]: I0123 18:57:34.829297 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:34.829480 kubelet[2425]: I0123 18:57:34.829325 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:34.829480 kubelet[2425]: I0123 18:57:34.829349 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:34.829480 kubelet[2425]: I0123 18:57:34.829376 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:34.829601 kubelet[2425]: I0123 18:57:34.829526 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:34.909630 kubelet[2425]: I0123 18:57:34.909565 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:57:34.910056 kubelet[2425]: E0123 18:57:34.909961 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 23 18:57:35.021293 kubelet[2425]: E0123 18:57:35.020913 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.022097 containerd[1599]: time="2026-01-23T18:57:35.022007634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:35.024628 kubelet[2425]: E0123 18:57:35.024557 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.025342 containerd[1599]: time="2026-01-23T18:57:35.025288686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:35.028164 kubelet[2425]: E0123 18:57:35.028087 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.028647 containerd[1599]: time="2026-01-23T18:57:35.028528643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16c1a764fb2c2c13872406f100fb26e2,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:35.131852 kubelet[2425]: E0123 18:57:35.131782 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Jan 23 18:57:35.312402 kubelet[2425]: I0123 18:57:35.312145 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:57:35.312634 kubelet[2425]: E0123 18:57:35.312543 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" 
node="localhost" Jan 23 18:57:35.446692 kubelet[2425]: E0123 18:57:35.446542 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:57:35.460232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605498601.mount: Deactivated successfully. Jan 23 18:57:35.469161 containerd[1599]: time="2026-01-23T18:57:35.469082861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:35.472728 containerd[1599]: time="2026-01-23T18:57:35.472526179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:57:35.474961 containerd[1599]: time="2026-01-23T18:57:35.474937241Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:35.477395 containerd[1599]: time="2026-01-23T18:57:35.477340344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:35.479118 containerd[1599]: time="2026-01-23T18:57:35.478787776Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:35.480193 containerd[1599]: time="2026-01-23T18:57:35.480164059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:57:35.482167 containerd[1599]: time="2026-01-23T18:57:35.481913155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:57:35.483965 containerd[1599]: time="2026-01-23T18:57:35.483898816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:57:35.484509 containerd[1599]: time="2026-01-23T18:57:35.484447062Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.522652ms" Jan 23 18:57:35.485394 kubelet[2425]: E0123 18:57:35.485175 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:57:35.488068 containerd[1599]: time="2026-01-23T18:57:35.487988722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.348662ms" Jan 23 18:57:35.489941 containerd[1599]: time="2026-01-23T18:57:35.489894750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.395009ms" Jan 23 18:57:35.518147 containerd[1599]: time="2026-01-23T18:57:35.516905764Z" level=info msg="connecting to shim d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351" address="unix:///run/containerd/s/df816f77a12df06a7d050708ddfd03d26600bad0a38b77e7e3be592684292aa7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:35.530621 containerd[1599]: time="2026-01-23T18:57:35.530467745Z" level=info msg="connecting to shim 381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59" address="unix:///run/containerd/s/cdb7fb6a40eac03e0f0c607224258f1e6ecaf4406ef8d05000c0e2057589cb58" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:35.533274 containerd[1599]: time="2026-01-23T18:57:35.533077530Z" level=info msg="connecting to shim 9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735" address="unix:///run/containerd/s/369113090188c9e25b65c0d62a673fd94f98a0ebfe760b5db851469f88ecc5ab" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:35.563158 systemd[1]: Started cri-containerd-d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351.scope - libcontainer container d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351. Jan 23 18:57:35.568799 systemd[1]: Started cri-containerd-381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59.scope - libcontainer container 381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59. Jan 23 18:57:35.587077 systemd[1]: Started cri-containerd-9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735.scope - libcontainer container 9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735. 
Jan 23 18:57:35.589000 audit: BPF prog-id=83 op=LOAD Jan 23 18:57:35.590000 audit: BPF prog-id=84 op=LOAD Jan 23 18:57:35.590000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.590000 audit: BPF prog-id=84 op=UNLOAD Jan 23 18:57:35.590000 audit[2530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.591000 audit: BPF prog-id=85 op=LOAD Jan 23 18:57:35.591000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.591000 audit: BPF prog-id=86 op=LOAD Jan 23 18:57:35.591000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.591000 audit: BPF prog-id=86 op=UNLOAD Jan 23 18:57:35.591000 audit[2530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.591000 audit: BPF prog-id=85 op=UNLOAD Jan 23 18:57:35.591000 audit[2530]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.591000 audit: BPF prog-id=87 op=LOAD Jan 23 18:57:35.591000 audit[2530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2503 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338313236386166393865303138636633333361323263386139636133 Jan 23 18:57:35.592000 audit: BPF prog-id=88 op=LOAD Jan 23 18:57:35.593000 audit: BPF prog-id=89 op=LOAD Jan 23 18:57:35.593000 audit[2507]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.593000 audit: BPF prog-id=89 op=UNLOAD Jan 23 18:57:35.593000 audit[2507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.594000 audit: BPF prog-id=90 op=LOAD Jan 23 18:57:35.594000 audit[2507]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.594000 audit: BPF prog-id=91 op=LOAD Jan 23 18:57:35.594000 audit[2507]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2477 pid=2507 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.595000 audit: BPF prog-id=91 op=UNLOAD Jan 23 18:57:35.595000 audit[2507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.595000 audit: BPF prog-id=90 op=UNLOAD Jan 23 18:57:35.595000 audit[2507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.595000 audit: BPF prog-id=92 op=LOAD Jan 23 18:57:35.595000 audit[2507]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2477 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437666235643664656430303037303435336661313266353839666339 Jan 23 18:57:35.602000 audit: BPF prog-id=93 op=LOAD Jan 23 18:57:35.604000 audit: BPF prog-id=94 op=LOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=94 op=UNLOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=95 op=LOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=96 op=LOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=96 op=UNLOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=95 op=UNLOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.604000 audit: BPF prog-id=97 op=LOAD Jan 23 18:57:35.604000 audit[2545]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2501 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.604000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343563396466396563366435373430663536373465326330333738 Jan 23 18:57:35.660095 containerd[1599]: time="2026-01-23T18:57:35.659929265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59\"" Jan 23 18:57:35.660418 containerd[1599]: time="2026-01-23T18:57:35.660341579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351\"" Jan 23 18:57:35.661495 kubelet[2425]: E0123 18:57:35.661330 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.661495 kubelet[2425]: E0123 18:57:35.661374 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.667198 containerd[1599]: time="2026-01-23T18:57:35.667085184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16c1a764fb2c2c13872406f100fb26e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735\"" Jan 23 18:57:35.668432 containerd[1599]: time="2026-01-23T18:57:35.668187056Z" level=info msg="CreateContainer within sandbox \"d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:57:35.668536 kubelet[2425]: E0123 18:57:35.668462 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:35.671095 containerd[1599]: time="2026-01-23T18:57:35.671039506Z" level=info msg="CreateContainer within sandbox \"381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:57:35.673469 containerd[1599]: time="2026-01-23T18:57:35.673359263Z" level=info msg="CreateContainer within sandbox \"9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:57:35.682926 containerd[1599]: time="2026-01-23T18:57:35.682797693Z" level=info msg="Container 4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:35.685660 containerd[1599]: time="2026-01-23T18:57:35.685194350Z" level=info msg="Container 5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:35.691254 containerd[1599]: time="2026-01-23T18:57:35.691140369Z" level=info msg="Container 5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:35.696567 containerd[1599]: time="2026-01-23T18:57:35.696467013Z" level=info msg="CreateContainer within sandbox 
\"381268af98e018cf333a22c8a9ca3f2710e8dd632d96eae37aafa91de52f2a59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41\"" Jan 23 18:57:35.697738 containerd[1599]: time="2026-01-23T18:57:35.697375231Z" level=info msg="StartContainer for \"4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41\"" Jan 23 18:57:35.698527 containerd[1599]: time="2026-01-23T18:57:35.698477338Z" level=info msg="connecting to shim 4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41" address="unix:///run/containerd/s/cdb7fb6a40eac03e0f0c607224258f1e6ecaf4406ef8d05000c0e2057589cb58" protocol=ttrpc version=3 Jan 23 18:57:35.703217 containerd[1599]: time="2026-01-23T18:57:35.703146365Z" level=info msg="CreateContainer within sandbox \"9b45c9df9ec6d5740f5674e2c03782f3c41182701e6204f8a6a7a1091d2b3735\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708\"" Jan 23 18:57:35.703689 containerd[1599]: time="2026-01-23T18:57:35.703667706Z" level=info msg="StartContainer for \"5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708\"" Jan 23 18:57:35.705215 containerd[1599]: time="2026-01-23T18:57:35.705167176Z" level=info msg="connecting to shim 5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708" address="unix:///run/containerd/s/369113090188c9e25b65c0d62a673fd94f98a0ebfe760b5db851469f88ecc5ab" protocol=ttrpc version=3 Jan 23 18:57:35.706248 containerd[1599]: time="2026-01-23T18:57:35.706170732Z" level=info msg="CreateContainer within sandbox \"d7fb5d6ded00070453fa12f589fc9282e64de43b7004034ef34fe041bdaba351\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e\"" Jan 23 18:57:35.707321 containerd[1599]: time="2026-01-23T18:57:35.707301841Z" level=info msg="StartContainer for \"5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e\"" Jan 23 18:57:35.710094 containerd[1599]: time="2026-01-23T18:57:35.710068656Z" level=info msg="connecting to shim 5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e" address="unix:///run/containerd/s/df816f77a12df06a7d050708ddfd03d26600bad0a38b77e7e3be592684292aa7" protocol=ttrpc version=3 Jan 23 18:57:35.722034 systemd[1]: Started cri-containerd-4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41.scope - libcontainer container 4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41. Jan 23 18:57:35.737076 systemd[1]: Started cri-containerd-5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e.scope - libcontainer container 5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e. 
Jan 23 18:57:35.741000 audit: BPF prog-id=98 op=LOAD Jan 23 18:57:35.742000 audit: BPF prog-id=99 op=LOAD Jan 23 18:57:35.742000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.742000 audit: BPF prog-id=99 op=UNLOAD Jan 23 18:57:35.742000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.742000 audit: BPF prog-id=100 op=LOAD Jan 23 18:57:35.742000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.743000 audit: BPF prog-id=101 op=LOAD Jan 23 18:57:35.743000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.743000 audit: BPF prog-id=101 op=UNLOAD Jan 23 18:57:35.743000 audit[2611]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.743000 audit: BPF prog-id=100 op=UNLOAD Jan 23 18:57:35.743000 audit[2611]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.743000 audit: BPF prog-id=102 op=LOAD Jan 23 18:57:35.743000 audit[2611]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2503 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363431663333383632633166643337643636353536656364356139 Jan 23 18:57:35.755021 systemd[1]: Started cri-containerd-5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708.scope - libcontainer container 5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708. Jan 23 18:57:35.756000 audit: BPF prog-id=103 op=LOAD Jan 23 18:57:35.757000 audit: BPF prog-id=104 op=LOAD Jan 23 18:57:35.757000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.757000 audit: BPF prog-id=104 op=UNLOAD Jan 23 18:57:35.757000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.758000 audit: BPF prog-id=105 op=LOAD Jan 23 18:57:35.758000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.758000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.758000 audit: BPF prog-id=106 op=LOAD Jan 23 18:57:35.758000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.759000 audit: BPF prog-id=106 op=UNLOAD Jan 23 18:57:35.759000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.759000 audit: BPF prog-id=105 op=UNLOAD Jan 23 18:57:35.759000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.759000 audit: BPF prog-id=107 op=LOAD Jan 23 18:57:35.759000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2477 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333964633365613361346439313434336263343537313135396236 Jan 23 18:57:35.783000 audit: BPF prog-id=108 op=LOAD Jan 23 18:57:35.784000 audit: BPF prog-id=109 op=LOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=109 op=UNLOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=110 op=LOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=111 op=LOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=111 op=UNLOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=110 op=UNLOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.784000 audit: BPF prog-id=112 op=LOAD Jan 23 18:57:35.784000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2501 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:35.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643438323863656435363932373838633135373839373134643363 Jan 23 18:57:35.807641 kubelet[2425]: E0123 18:57:35.807542 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:57:35.810453 containerd[1599]: time="2026-01-23T18:57:35.810288491Z" level=info msg="StartContainer for \"4b641f33862c1fd37d66556ecd5a9e0135fa048a941e346f1a7dc6b9081ebb41\" returns successfully" Jan 23 18:57:35.827216 containerd[1599]: time="2026-01-23T18:57:35.826945369Z" level=info msg="StartContainer for \"5439dc3ea3a4d91443bc4571159b61a96c7b211dbc65352cfe820c24e635260e\" returns successfully" Jan 23 18:57:35.859543 kubelet[2425]: E0123 18:57:35.859309 2425 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:57:35.864138 containerd[1599]: time="2026-01-23T18:57:35.863997043Z" level=info msg="StartContainer for \"5cd4828ced5692788c15789714d3ce5b2408aeefed872bef105902cb17427708\" returns successfully" Jan 23 18:57:36.121171 kubelet[2425]: I0123 18:57:36.121025 2425 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:57:36.580470 kubelet[2425]: E0123 18:57:36.580257 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:36.580470 kubelet[2425]: E0123 18:57:36.580422 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:36.585128 kubelet[2425]: E0123 18:57:36.585099 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:36.585478 kubelet[2425]: E0123 18:57:36.585276 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:36.589119 kubelet[2425]: E0123 18:57:36.589101 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jan 23 18:57:36.589383 kubelet[2425]: E0123 18:57:36.589366 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:37.080429 kubelet[2425]: E0123 18:57:37.080242 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 18:57:37.166183 kubelet[2425]: I0123 18:57:37.166111 2425 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:57:37.166183 kubelet[2425]: E0123 18:57:37.166141 2425 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 18:57:37.185222 kubelet[2425]: E0123 18:57:37.185063 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.285458 kubelet[2425]: E0123 18:57:37.285310 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.385983 kubelet[2425]: E0123 18:57:37.385527 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.486983 kubelet[2425]: E0123 18:57:37.486737 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.587038 kubelet[2425]: E0123 18:57:37.586997 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.589795 kubelet[2425]: E0123 18:57:37.589622 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:37.589795 kubelet[2425]: E0123 18:57:37.589731 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:37.590159 kubelet[2425]: E0123 18:57:37.590111 2425 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:57:37.590206 kubelet[2425]: E0123 18:57:37.590189 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:37.687779 kubelet[2425]: E0123 18:57:37.687634 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.788859 kubelet[2425]: E0123 18:57:37.788724 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.889622 kubelet[2425]: E0123 18:57:37.889413 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:37.990521 kubelet[2425]: E0123 18:57:37.990313 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:38.091742 kubelet[2425]: E0123 18:57:38.091606 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:38.192395 kubelet[2425]: E0123 18:57:38.192362 2425 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:38.293468 kubelet[2425]: E0123 18:57:38.293281 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:38.394537 kubelet[2425]: E0123 18:57:38.394424 2425 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:57:38.515866 kubelet[2425]: I0123 18:57:38.515754 2425 apiserver.go:52] "Watching apiserver" Jan 23 18:57:38.528654 kubelet[2425]: I0123 18:57:38.528355 2425 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:57:38.531028 kubelet[2425]: I0123 18:57:38.530908 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:38.541796 kubelet[2425]: E0123 18:57:38.541763 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:38.542002 kubelet[2425]: I0123 18:57:38.541949 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:57:38.548040 kubelet[2425]: I0123 18:57:38.547929 2425 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:38.590526 kubelet[2425]: E0123 18:57:38.590397 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:38.590845 kubelet[2425]: E0123 18:57:38.590742 2425 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:39.572370 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-10.scope)... Jan 23 18:57:39.572468 systemd[1]: Reloading... Jan 23 18:57:39.660920 zram_generator::config[2768]: No configuration found. Jan 23 18:57:39.913118 systemd[1]: Reloading finished in 340 ms. Jan 23 18:57:39.956111 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:39.971689 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:57:39.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:39.972152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:39.972205 systemd[1]: kubelet.service: Consumed 1.409s CPU time, 126.4M memory peak. Jan 23 18:57:39.975080 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 23 18:57:39.975140 kernel: audit: type=1131 audit(1769194659.971:388): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:39.977320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:57:39.977000 audit: BPF prog-id=113 op=LOAD Jan 23 18:57:39.988916 kernel: audit: type=1334 audit(1769194659.977:389): prog-id=113 op=LOAD Jan 23 18:57:39.977000 audit: BPF prog-id=67 op=UNLOAD Jan 23 18:57:39.992416 kernel: audit: type=1334 audit(1769194659.977:390): prog-id=67 op=UNLOAD Jan 23 18:57:39.992463 kernel: audit: type=1334 audit(1769194659.978:391): prog-id=114 op=LOAD Jan 23 18:57:39.978000 audit: BPF prog-id=114 op=LOAD Jan 23 18:57:39.995937 kernel: audit: type=1334 audit(1769194659.978:392): prog-id=64 op=UNLOAD Jan 23 18:57:39.978000 audit: BPF prog-id=64 op=UNLOAD Jan 23 18:57:39.979000 audit: BPF prog-id=115 op=LOAD Jan 23 18:57:40.002640 kernel: audit: type=1334 audit(1769194659.979:393): prog-id=115 op=LOAD Jan 23 18:57:40.002684 kernel: audit: type=1334 audit(1769194659.979:394): prog-id=116 op=LOAD Jan 23 18:57:39.979000 audit: BPF prog-id=116 op=LOAD Jan 23 18:57:39.979000 audit: BPF prog-id=65 op=UNLOAD Jan 23 18:57:40.009536 kernel: audit: type=1334 audit(1769194659.979:395): prog-id=65 op=UNLOAD Jan 23 18:57:40.009604 kernel: audit: type=1334 audit(1769194659.979:396): prog-id=66 op=UNLOAD Jan 23 18:57:39.979000 audit: BPF prog-id=66 op=UNLOAD Jan 23 18:57:39.981000 audit: BPF prog-id=117 op=LOAD Jan 23 18:57:40.016347 kernel: audit: type=1334 audit(1769194659.981:397): prog-id=117 op=LOAD Jan 23 18:57:39.981000 audit: BPF prog-id=76 op=UNLOAD Jan 23 18:57:39.982000 audit: BPF prog-id=118 op=LOAD Jan 23 18:57:39.982000 audit: BPF prog-id=63 op=UNLOAD Jan 23 18:57:39.983000 audit: BPF prog-id=119 op=LOAD Jan 23 18:57:39.983000 audit: BPF prog-id=80 op=UNLOAD Jan 23 18:57:39.983000 audit: BPF prog-id=120 op=LOAD Jan 23 18:57:39.983000 audit: BPF prog-id=121 op=LOAD Jan 23 18:57:39.983000 audit: BPF prog-id=81 op=UNLOAD Jan 23 18:57:39.983000 audit: BPF prog-id=82 op=UNLOAD Jan 23 18:57:39.985000 audit: BPF prog-id=122 op=LOAD Jan 23 18:57:39.985000 audit: BPF prog-id=77 op=UNLOAD Jan 23 18:57:39.985000 audit: BPF prog-id=123 op=LOAD Jan 23 18:57:40.017000 audit: BPF prog-id=124 op=LOAD Jan 23 18:57:40.017000 audit: BPF prog-id=78 op=UNLOAD Jan 23 18:57:40.017000 audit: BPF prog-id=79 op=UNLOAD Jan 23 18:57:40.020000 audit: BPF prog-id=125 op=LOAD Jan 23 18:57:40.020000 audit: BPF prog-id=68 op=UNLOAD Jan 23 18:57:40.020000 audit: BPF prog-id=126 op=LOAD Jan 23 18:57:40.020000 audit: BPF prog-id=127 op=LOAD Jan 23 18:57:40.020000 audit: BPF prog-id=69 op=UNLOAD Jan 23 18:57:40.020000 audit: BPF prog-id=70 op=UNLOAD Jan 23 18:57:40.021000 audit: BPF prog-id=128 op=LOAD Jan 23 18:57:40.021000 audit: BPF prog-id=71 op=UNLOAD Jan 23 18:57:40.021000 audit: BPF prog-id=129 op=LOAD Jan 23 18:57:40.022000 audit: BPF prog-id=130 op=LOAD Jan 23 18:57:40.022000 audit: BPF prog-id=72 op=UNLOAD Jan 23 18:57:40.022000 audit: BPF prog-id=73 op=UNLOAD Jan 23 18:57:40.022000 audit: BPF prog-id=131 op=LOAD Jan 23 18:57:40.023000 audit: BPF prog-id=132 op=LOAD Jan 23 18:57:40.023000 audit: BPF prog-id=74 op=UNLOAD Jan 23 18:57:40.023000 audit: BPF prog-id=75 op=UNLOAD Jan 23 18:57:40.235299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:40.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:57:40.257478 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:57:40.338922 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:57:40.338922 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:57:40.338922 kubelet[2813]: I0123 18:57:40.338071 2813 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:57:40.347006 kubelet[2813]: I0123 18:57:40.346936 2813 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:57:40.347006 kubelet[2813]: I0123 18:57:40.346990 2813 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:57:40.347097 kubelet[2813]: I0123 18:57:40.347018 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:57:40.347097 kubelet[2813]: I0123 18:57:40.347025 2813 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:57:40.347308 kubelet[2813]: I0123 18:57:40.347238 2813 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:57:40.348573 kubelet[2813]: I0123 18:57:40.348512 2813 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:57:40.351328 kubelet[2813]: I0123 18:57:40.351300 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:57:40.358660 kubelet[2813]: I0123 18:57:40.358587 2813 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:57:40.364419 kubelet[2813]: I0123 18:57:40.364306 2813 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:57:40.364944 kubelet[2813]: I0123 18:57:40.364881 2813 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:57:40.365096 kubelet[2813]: I0123 18:57:40.364934 2813 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:57:40.365218 kubelet[2813]: I0123 18:57:40.365103 2813 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:57:40.365218 kubelet[2813]: I0123 18:57:40.365113 2813 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:57:40.365218 kubelet[2813]: I0123 18:57:40.365136 2813 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:57:40.366743 kubelet[2813]: I0123 18:57:40.366654 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:40.367221 kubelet[2813]: I0123 18:57:40.367138 2813 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:57:40.367221 kubelet[2813]: I0123 18:57:40.367201 2813 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:57:40.367277 kubelet[2813]: I0123 18:57:40.367230 2813 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:57:40.367277 kubelet[2813]: I0123 18:57:40.367254 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:57:40.369258 kubelet[2813]: I0123 18:57:40.369137 2813 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:57:40.371923 kubelet[2813]: I0123 18:57:40.371742 2813 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:57:40.372151 kubelet[2813]: I0123 18:57:40.372140 2813 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:57:40.379462 
kubelet[2813]: I0123 18:57:40.379410 2813 server.go:1262] "Started kubelet" Jan 23 18:57:40.380998 kubelet[2813]: I0123 18:57:40.380090 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:57:40.380998 kubelet[2813]: I0123 18:57:40.380116 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:57:40.380998 kubelet[2813]: I0123 18:57:40.380154 2813 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:57:40.380998 kubelet[2813]: I0123 18:57:40.380345 2813 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:57:40.381120 kubelet[2813]: I0123 18:57:40.381024 2813 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:57:40.384992 kubelet[2813]: I0123 18:57:40.384974 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:57:40.385568 kubelet[2813]: I0123 18:57:40.385497 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:57:40.387294 kubelet[2813]: I0123 18:57:40.387244 2813 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:57:40.389603 kubelet[2813]: I0123 18:57:40.389452 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:57:40.390903 kubelet[2813]: I0123 18:57:40.389738 2813 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:57:40.393473 kubelet[2813]: E0123 18:57:40.393402 2813 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:57:40.395613 kubelet[2813]: I0123 18:57:40.395064 2813 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:57:40.395613 kubelet[2813]: I0123 18:57:40.395143 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:57:40.397649 kubelet[2813]: I0123 18:57:40.397558 2813 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:57:40.424327 kubelet[2813]: I0123 18:57:40.424215 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:57:40.428720 kubelet[2813]: I0123 18:57:40.428666 2813 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:57:40.428720 kubelet[2813]: I0123 18:57:40.428716 2813 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:57:40.428927 kubelet[2813]: I0123 18:57:40.428736 2813 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:57:40.428927 kubelet[2813]: E0123 18:57:40.428779 2813 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:57:40.442427 kubelet[2813]: I0123 18:57:40.442326 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:57:40.442427 kubelet[2813]: I0123 18:57:40.442388 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:57:40.442427 kubelet[2813]: I0123 18:57:40.442405 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:40.442537 kubelet[2813]: I0123 18:57:40.442513 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:57:40.442537 kubelet[2813]: I0123 18:57:40.442523 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:57:40.442537 kubelet[2813]: I0123 18:57:40.442537 2813 policy_none.go:49] "None policy: Start" Jan 23 18:57:40.442589 kubelet[2813]: I0123 18:57:40.442546 2813 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:57:40.442589 kubelet[2813]: I0123 18:57:40.442556 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:57:40.443175 kubelet[2813]: I0123 18:57:40.442632 2813 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 18:57:40.443175 kubelet[2813]: I0123 18:57:40.442645 2813 policy_none.go:47] "Start" Jan 23 18:57:40.449932 kubelet[2813]: E0123 18:57:40.449905 2813 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:57:40.450073 kubelet[2813]: I0123 18:57:40.450045 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:57:40.450273 kubelet[2813]: I0123 18:57:40.450056 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:57:40.450309 kubelet[2813]: I0123 18:57:40.450304 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:57:40.452018 kubelet[2813]: E0123 18:57:40.451759 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:57:40.530965 kubelet[2813]: I0123 18:57:40.530619 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.530965 kubelet[2813]: I0123 18:57:40.530634 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:40.532299 kubelet[2813]: I0123 18:57:40.531069 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:57:40.539620 kubelet[2813]: E0123 18:57:40.539432 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.540544 kubelet[2813]: E0123 18:57:40.540519 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 23 18:57:40.540544 kubelet[2813]: E0123 18:57:40.540535 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:40.562297 kubelet[2813]: I0123 18:57:40.562087 2813 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:57:40.571130 kubelet[2813]: I0123 18:57:40.570927 2813 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 18:57:40.571130 kubelet[2813]: I0123 18:57:40.571015 2813 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:57:40.590748 kubelet[2813]: I0123 18:57:40.590534 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:40.590748 kubelet[2813]: I0123 18:57:40.590704 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.590748 kubelet[2813]: I0123 18:57:40.590727 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.590984 kubelet[2813]: I0123 18:57:40.590947 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.590984 kubelet[2813]: I0123 18:57:40.590970 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " 
pod="kube-system/kube-scheduler-localhost" Jan 23 18:57:40.591025 kubelet[2813]: I0123 18:57:40.590988 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:40.591025 kubelet[2813]: I0123 18:57:40.591006 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.591189 kubelet[2813]: I0123 18:57:40.591100 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:40.591189 kubelet[2813]: I0123 18:57:40.591165 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16c1a764fb2c2c13872406f100fb26e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16c1a764fb2c2c13872406f100fb26e2\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:40.840433 kubelet[2813]: E0123 18:57:40.840144 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:40.841786 kubelet[2813]: E0123 18:57:40.841699 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:40.842210 kubelet[2813]: E0123 18:57:40.842183 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:41.369115 kubelet[2813]: I0123 18:57:41.369064 2813 apiserver.go:52] "Watching apiserver" Jan 23 18:57:41.396616 kubelet[2813]: I0123 18:57:41.396555 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:57:41.436897 kubelet[2813]: I0123 18:57:41.436368 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.436354171 podStartE2EDuration="3.436354171s" podCreationTimestamp="2026-01-23 18:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:41.434934551 +0000 UTC m=+1.168324702" watchObservedRunningTime="2026-01-23 18:57:41.436354171 +0000 UTC m=+1.169744332" Jan 23 18:57:41.438581 kubelet[2813]: I0123 18:57:41.438500 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.438491338 podStartE2EDuration="3.438491338s" podCreationTimestamp="2026-01-23 18:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:41.424687145 +0000 UTC m=+1.158077305" watchObservedRunningTime="2026-01-23 18:57:41.438491338 +0000 UTC m=+1.171881509" Jan 23 18:57:41.446343 kubelet[2813]: I0123 18:57:41.446073 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.446063759 podStartE2EDuration="3.446063759s" podCreationTimestamp="2026-01-23 18:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:41.445569081 +0000 UTC m=+1.178959242" watchObservedRunningTime="2026-01-23 18:57:41.446063759 +0000 UTC m=+1.179453920" Jan 23 18:57:41.446942 kubelet[2813]: E0123 18:57:41.446713 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:41.447473 kubelet[2813]: I0123 18:57:41.447390 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:41.448073 kubelet[2813]: I0123 18:57:41.447922 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:41.456320 kubelet[2813]: E0123 18:57:41.456180 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:57:41.456526 kubelet[2813]: E0123 18:57:41.456453 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:41.459081 kubelet[2813]: E0123 18:57:41.459053 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 18:57:41.459262 kubelet[2813]: E0123 18:57:41.459216 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:42.449900 kubelet[2813]: E0123 18:57:42.449533 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:42.450877 kubelet[2813]: E0123 18:57:42.450187 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:42.451643 kubelet[2813]: E0123 18:57:42.451411 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:43.452439 kubelet[2813]: E0123 18:57:43.452349 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:44.822173 kubelet[2813]: I0123 18:57:44.822065 2813 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:57:44.822732 containerd[1599]: time="2026-01-23T18:57:44.822652226Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Jan 23 18:57:44.823319 kubelet[2813]: I0123 18:57:44.823047 2813 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:57:45.713564 systemd[1]: Created slice kubepods-besteffort-podf6112a4d_731c_411a_a8f0_522e8a19f57f.slice - libcontainer container kubepods-besteffort-podf6112a4d_731c_411a_a8f0_522e8a19f57f.slice. Jan 23 18:57:45.825959 kubelet[2813]: I0123 18:57:45.825721 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6112a4d-731c-411a-a8f0-522e8a19f57f-lib-modules\") pod \"kube-proxy-2bqq9\" (UID: \"f6112a4d-731c-411a-a8f0-522e8a19f57f\") " pod="kube-system/kube-proxy-2bqq9" Jan 23 18:57:45.825959 kubelet[2813]: I0123 18:57:45.825785 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mn2l\" (UniqueName: \"kubernetes.io/projected/f6112a4d-731c-411a-a8f0-522e8a19f57f-kube-api-access-7mn2l\") pod \"kube-proxy-2bqq9\" (UID: \"f6112a4d-731c-411a-a8f0-522e8a19f57f\") " pod="kube-system/kube-proxy-2bqq9" Jan 23 18:57:45.825959 kubelet[2813]: I0123 18:57:45.825878 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6112a4d-731c-411a-a8f0-522e8a19f57f-kube-proxy\") pod \"kube-proxy-2bqq9\" (UID: \"f6112a4d-731c-411a-a8f0-522e8a19f57f\") " pod="kube-system/kube-proxy-2bqq9" Jan 23 18:57:45.825959 kubelet[2813]: I0123 18:57:45.825896 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6112a4d-731c-411a-a8f0-522e8a19f57f-xtables-lock\") pod \"kube-proxy-2bqq9\" (UID: \"f6112a4d-731c-411a-a8f0-522e8a19f57f\") " pod="kube-system/kube-proxy-2bqq9" Jan 23 18:57:45.981965 systemd[1]: Created slice kubepods-besteffort-pod9819e91e_ad48_4f77_9703_cd8aa5b3f1f1.slice - libcontainer container kubepods-besteffort-pod9819e91e_ad48_4f77_9703_cd8aa5b3f1f1.slice. 
Jan 23 18:57:46.027632 kubelet[2813]: I0123 18:57:46.027464 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzg4\" (UniqueName: \"kubernetes.io/projected/9819e91e-ad48-4f77-9703-cd8aa5b3f1f1-kube-api-access-zdzg4\") pod \"tigera-operator-65cdcdfd6d-vzlvx\" (UID: \"9819e91e-ad48-4f77-9703-cd8aa5b3f1f1\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vzlvx" Jan 23 18:57:46.027632 kubelet[2813]: I0123 18:57:46.027592 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9819e91e-ad48-4f77-9703-cd8aa5b3f1f1-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vzlvx\" (UID: \"9819e91e-ad48-4f77-9703-cd8aa5b3f1f1\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vzlvx" Jan 23 18:57:46.030533 kubelet[2813]: E0123 18:57:46.030476 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:46.031766 containerd[1599]: time="2026-01-23T18:57:46.031483378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bqq9,Uid:f6112a4d-731c-411a-a8f0-522e8a19f57f,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:46.092498 containerd[1599]: time="2026-01-23T18:57:46.092402504Z" level=info msg="connecting to shim 495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c" address="unix:///run/containerd/s/ec0c2b2fd91614e67f3dc89e862b73ea79fef2733b6525cc1d2346697df97ea2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:46.175289 systemd[1]: Started cri-containerd-495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c.scope - libcontainer container 495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c. 
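The containerd entries in this journal are structured key=value records (time, level, msg, address, namespace, and so on). A small illustrative Go parser for that shape, written against the lines above rather than any containerd library:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// kvPattern matches key="quoted value" (with escapes) or key=bareword,
// which is the shape of the containerd entries above.
var kvPattern = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

// parseFields extracts the key=value pairs from one log line.
func parseFields(line string) map[string]string {
	fields := map[string]string{}
	for _, m := range kvPattern.FindAllStringSubmatch(line, -1) {
		key, val := m[1], m[2]
		if unquoted, err := strconv.Unquote(val); err == nil {
			val = unquoted
		}
		fields[key] = val
	}
	return fields
}

func main() {
	// Abbreviated from the "connecting to shim" entry in the journal above.
	line := `time="2026-01-23T18:57:46.092402504Z" level=info msg="connecting to shim 495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c" namespace=k8s.io protocol=ttrpc version=3`
	f := parseFields(line)
	fmt.Println(f["level"], "-", f["msg"])
}
```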
Jan 23 18:57:46.193000 audit: BPF prog-id=133 op=LOAD Jan 23 18:57:46.196920 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 18:57:46.197143 kernel: audit: type=1334 audit(1769194666.193:430): prog-id=133 op=LOAD Jan 23 18:57:46.195000 audit: BPF prog-id=134 op=LOAD Jan 23 18:57:46.204203 kernel: audit: type=1334 audit(1769194666.195:431): prog-id=134 op=LOAD Jan 23 18:57:46.204295 kernel: audit: type=1300 audit(1769194666.195:431): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.195000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.218677 kernel: audit: type=1327 audit(1769194666.195:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.195000 audit: BPF prog-id=134 op=UNLOAD Jan 23 18:57:46.235519 kernel: audit: type=1334 audit(1769194666.195:432): prog-id=134 op=UNLOAD Jan 23 18:57:46.195000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.248927 kernel: audit: type=1300 audit(1769194666.195:432): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.248960 kernel: audit: type=1327 audit(1769194666.195:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.258731 containerd[1599]: time="2026-01-23T18:57:46.258650390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bqq9,Uid:f6112a4d-731c-411a-a8f0-522e8a19f57f,Namespace:kube-system,Attempt:0,} returns sandbox id \"495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c\"" Jan 23 18:57:46.260370 kubelet[2813]: E0123 18:57:46.260210 2813 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:46.196000 audit: BPF prog-id=135 op=LOAD Jan 23 18:57:46.266094 kernel: audit: type=1334 audit(1769194666.196:433): prog-id=135 op=LOAD Jan 23 18:57:46.266340 kernel: audit: type=1300 audit(1769194666.196:433): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.196000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.270267 containerd[1599]: time="2026-01-23T18:57:46.270119781Z" level=info msg="CreateContainer within sandbox \"495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:57:46.281265 kernel: audit: type=1327 audit(1769194666.196:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.196000 audit: BPF prog-id=136 op=LOAD Jan 23 18:57:46.196000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.196000 audit: BPF prog-id=136 op=UNLOAD Jan 23 18:57:46.196000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.196000 audit: BPF prog-id=135 op=UNLOAD Jan 23 18:57:46.196000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.196000 audit: BPF prog-id=137 op=LOAD Jan 23 18:57:46.196000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2875 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439356433626661363133633238663039353431316431616233376230 Jan 23 18:57:46.296729 containerd[1599]: time="2026-01-23T18:57:46.296619228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vzlvx,Uid:9819e91e-ad48-4f77-9703-cd8aa5b3f1f1,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:57:46.296927 containerd[1599]: time="2026-01-23T18:57:46.296905584Z" level=info msg="Container 950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:46.307796 containerd[1599]: time="2026-01-23T18:57:46.307722961Z" level=info msg="CreateContainer within sandbox \"495d3bfa613c28f095411d1ab37b00fde730d107779dae06ce47cb87fcfc552c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513\"" Jan 23 18:57:46.311778 containerd[1599]: time="2026-01-23T18:57:46.311604498Z" level=info msg="StartContainer for \"950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513\"" Jan 23 18:57:46.316408 containerd[1599]: time="2026-01-23T18:57:46.316242785Z" level=info msg="connecting to shim 950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513" address="unix:///run/containerd/s/ec0c2b2fd91614e67f3dc89e862b73ea79fef2733b6525cc1d2346697df97ea2" protocol=ttrpc version=3 Jan 23 18:57:46.331895 containerd[1599]: time="2026-01-23T18:57:46.330782060Z" level=info msg="connecting to shim 7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7" address="unix:///run/containerd/s/5031d1d4fd2df6e87b141291642136afd27cb636c6a729412a11458984838ade" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:46.356067 systemd[1]: Started cri-containerd-950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513.scope - libcontainer container 950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513. Jan 23 18:57:46.382009 systemd[1]: Started cri-containerd-7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7.scope - libcontainer container 7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7. 
Jan 23 18:57:46.399000 audit: BPF prog-id=138 op=LOAD Jan 23 18:57:46.401000 audit: BPF prog-id=139 op=LOAD Jan 23 18:57:46.401000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.401000 audit: BPF prog-id=139 op=UNLOAD Jan 23 18:57:46.401000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.402000 audit: BPF prog-id=140 op=LOAD Jan 23 18:57:46.402000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.402000 audit: BPF prog-id=141 op=LOAD Jan 23 18:57:46.402000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.403000 audit: BPF prog-id=141 op=UNLOAD Jan 23 18:57:46.403000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.403000 audit: BPF prog-id=140 op=UNLOAD Jan 23 18:57:46.403000 audit[2945]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.403000 audit: BPF prog-id=142 op=LOAD Jan 23 18:57:46.403000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2928 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762643265306132393333323863343130303533353433393532633461 Jan 23 18:57:46.419000 audit: BPF prog-id=143 op=LOAD Jan 23 18:57:46.419000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2875 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303739366137343139363537383839616434333061666438303966 Jan 23 18:57:46.419000 audit: BPF prog-id=144 op=LOAD Jan 23 18:57:46.419000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2875 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303739366137343139363537383839616434333061666438303966 Jan 23 18:57:46.419000 audit: BPF prog-id=144 op=UNLOAD Jan 23 18:57:46.419000 audit[2913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303739366137343139363537383839616434333061666438303966 Jan 23 18:57:46.419000 audit: BPF prog-id=143 op=UNLOAD Jan 23 18:57:46.419000 audit[2913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2875 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303739366137343139363537383839616434333061666438303966 Jan 23 18:57:46.419000 audit: BPF prog-id=145 op=LOAD Jan 23 18:57:46.419000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2875 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303739366137343139363537383839616434333061666438303966 Jan 23 18:57:46.465345 containerd[1599]: time="2026-01-23T18:57:46.465260429Z" level=info msg="StartContainer for \"950796a7419657889ad430afd809faa08b142092c24c8e6c544a6566631c8513\" returns successfully" Jan 23 18:57:46.478649 containerd[1599]: time="2026-01-23T18:57:46.478435122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vzlvx,Uid:9819e91e-ad48-4f77-9703-cd8aa5b3f1f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7\"" Jan 23 18:57:46.481961 containerd[1599]: time="2026-01-23T18:57:46.481933262Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:57:46.747000 audit[3023]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.747000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff26259060 a2=0 a3=7fff2625904c items=0 ppid=2952 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:57:46.748000 audit[3024]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:46.748000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe411a3360 a2=0 a3=7ffe411a334c items=0 ppid=2952 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:57:46.751000 audit[3026]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:46.751000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4e450d20 a2=0 a3=7ffd4e450d0c items=0 ppid=2952 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:57:46.752000 audit[3027]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.752000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde2ccf960 a2=0 a3=7ffde2ccf94c items=0 ppid=2952 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:57:46.754000 audit[3029]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:46.754000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7f6e0d30 a2=0 a3=7ffc7f6e0d1c items=0 ppid=2952 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:57:46.759000 audit[3031]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.759000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8a1aa260 a2=0 a3=7ffe8a1aa24c items=0 ppid=2952 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:57:46.855000 audit[3032]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.855000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe72f635b0 a2=0 a3=7ffe72f6359c items=0 ppid=2952 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:57:46.862000 audit[3034]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.862000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd1f448bd0 a2=0 a3=7ffd1f448bbc items=0 ppid=2952 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.862000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 23 18:57:46.870000 audit[3037]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.870000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd4ebfd490 a2=0 a3=7ffd4ebfd47c items=0 ppid=2952 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 18:57:46.873000 audit[3038]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.873000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf482c0e0 a2=0 a3=7ffdf482c0cc items=0 ppid=2952 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:57:46.879000 audit[3040]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.879000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2f99f2b0 a2=0 a3=7ffe2f99f29c items=0 ppid=2952 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:57:46.882000 audit[3041]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.882000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffee4eb5f0 a2=0 a3=7fffee4eb5dc items=0 ppid=2952 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:57:46.888000 audit[3043]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.888000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe546208d0 a2=0 a3=7ffe546208bc items=0 ppid=2952 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:46.896000 audit[3046]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.896000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdbe6fb380 a2=0 a3=7ffdbe6fb36c items=0 ppid=2952 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:46.899000 audit[3047]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.899000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc52e8b8a0 a2=0 a3=7ffc52e8b88c items=0 ppid=2952 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:57:46.904000 audit[3049]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.904000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc19b3f8f0 a2=0 a3=7ffc19b3f8dc items=0 ppid=2952 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:57:46.907000 audit[3050]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.907000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeed875020 a2=0 a3=7ffeed87500c items=0 ppid=2952 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:57:46.913000 audit[3052]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.913000 
audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff214bd7f0 a2=0 a3=7fff214bd7dc items=0 ppid=2952 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 23 18:57:46.922000 audit[3055]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.922000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecf79acc0 a2=0 a3=7ffecf79acac items=0 ppid=2952 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 18:57:46.931000 audit[3058]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.931000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe368c9a40 a2=0 a3=7ffe368c9a2c items=0 ppid=2952 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 18:57:46.935000 audit[3059]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.935000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc45b6150 a2=0 a3=7ffdc45b613c items=0 ppid=2952 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:57:46.944000 audit[3061]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.944000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd050e70d0 a2=0 a3=7ffd050e70bc items=0 ppid=2952 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.944000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:46.953000 audit[3064]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.953000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf0434500 a2=0 a3=7ffcf04344ec items=0 ppid=2952 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:46.956000 audit[3065]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.956000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1718a040 a2=0 a3=7ffc1718a02c items=0 ppid=2952 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:57:46.962000 audit[3067]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:57:46.962000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd7e374dc0 a2=0 a3=7ffd7e374dac items=0 ppid=2952 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:57:46.999000 audit[3073]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:46.999000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe79a57b10 a2=0 a3=7ffe79a57afc items=0 ppid=2952 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:46.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:47.009000 audit[3073]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:47.009000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe79a57b10 a2=0 a3=7ffe79a57afc items=0 ppid=2952 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.009000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:47.013000 audit[3078]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.013000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd85d89f10 a2=0 a3=7ffd85d89efc items=0 ppid=2952 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:57:47.019000 audit[3080]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.019000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe0b371670 a2=0 a3=7ffe0b37165c items=0 ppid=2952 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 18:57:47.029000 audit[3083]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.029000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcbeeaba80 a2=0 a3=7ffcbeeaba6c items=0 ppid=2952 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 23 18:57:47.032000 audit[3084]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.032000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaa881af0 a2=0 a3=7ffeaa881adc items=0 ppid=2952 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:57:47.037000 audit[3086]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.037000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea02b1280 a2=0 a3=7ffea02b126c items=0 ppid=2952 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:57:47.040000 audit[3087]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.040000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf9ee1630 a2=0 a3=7ffdf9ee161c items=0 ppid=2952 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:57:47.045000 audit[3089]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.045000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9028df60 a2=0 a3=7ffc9028df4c items=0 ppid=2952 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:47.055000 audit[3092]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.055000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff81550450 a2=0 a3=7fff8155043c items=0 ppid=2952 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:47.058000 audit[3093]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.058000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b6f2ec0 a2=0 a3=7ffe9b6f2eac items=0 ppid=2952 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:57:47.064000 audit[3095]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
18:57:47.064000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefe8f4bf0 a2=0 a3=7ffefe8f4bdc items=0 ppid=2952 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:57:47.067000 audit[3096]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.067000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf83c5f80 a2=0 a3=7ffdf83c5f6c items=0 ppid=2952 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:57:47.072000 audit[3098]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.072000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcec944c0 a2=0 a3=7fffcec944ac items=0 ppid=2952 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 18:57:47.081000 audit[3101]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.081000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff788482f0 a2=0 a3=7fff788482dc items=0 ppid=2952 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 18:57:47.089000 audit[3104]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.089000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc515cd180 a2=0 a3=7ffc515cd16c items=0 ppid=2952 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.089000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 23 18:57:47.092000 audit[3105]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.092000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd64d0c6f0 a2=0 a3=7ffd64d0c6dc items=0 ppid=2952 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:57:47.098000 audit[3107]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.098000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc9fa19200 a2=0 a3=7ffc9fa191ec items=0 ppid=2952 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:47.107000 audit[3110]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.107000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcfab62ca0 a2=0 a3=7ffcfab62c8c items=0 ppid=2952 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:57:47.109000 audit[3111]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.109000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfaff3e60 a2=0 a3=7ffcfaff3e4c items=0 ppid=2952 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:57:47.115000 audit[3113]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.115000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffff12ccb30 a2=0 a3=7ffff12ccb1c items=0 ppid=2952 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:57:47.118000 audit[3114]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.118000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb4162c70 a2=0 a3=7ffdb4162c5c items=0 ppid=2952 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:57:47.124000 audit[3116]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.124000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd3c967bc0 a2=0 a3=7ffd3c967bac items=0 ppid=2952 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:57:47.133000 audit[3119]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:57:47.133000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8b83c140 a2=0 a3=7fff8b83c12c items=0 ppid=2952 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:57:47.140000 audit[3121]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:57:47.140000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdb6a14f00 a2=0 a3=7ffdb6a14eec items=0 ppid=2952 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:47.140000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:47.140000 audit[3121]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:57:47.140000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdb6a14f00 a2=0 a3=7ffdb6a14eec items=0 ppid=2952 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 18:57:47.140000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:47.479898 kubelet[2813]: E0123 18:57:47.479694 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:47.869904 kubelet[2813]: E0123 18:57:47.869719 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:47.887319 kubelet[2813]: I0123 18:57:47.887260 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2bqq9" podStartSLOduration=2.887242775 podStartE2EDuration="2.887242775s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:47.491067518 +0000 UTC m=+7.224457679" watchObservedRunningTime="2026-01-23 18:57:47.887242775 +0000 UTC m=+7.620632946" Jan 23 18:57:48.004633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152975035.mount: Deactivated successfully. Jan 23 18:57:48.484642 kubelet[2813]: E0123 18:57:48.484465 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:48.484642 kubelet[2813]: E0123 18:57:48.484512 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:49.608230 update_engine[1584]: I20260123 18:57:49.608047 1584 update_attempter.cc:509] Updating boot flags... 
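The NETFILTER_CFG/SYSCALL/PROCTITLE triples above are kube-proxy (via /usr/sbin/xtables-nft-multi) registering its KUBE-* chains and rules for IPv4 and IPv6; the PROCTITLE field is the invoking command line, hex-encoded with NUL bytes between arguments and truncated by the kernel for long commands. A minimal decoding sketch, assuming only the standard auditd proctitle encoding and not part of the captured journal:

# Sketch, not from the journal: decode an audit PROCTITLE hex field back
# into the readable command line (arguments are NUL-separated in the raw bytes).
def decode_proctitle(hex_field: str) -> str:
    raw = bytes.fromhex(hex_field)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# Using one of the PROCTITLE values recorded above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 --noflush --counters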
Jan 23 18:57:49.916179 containerd[1599]: time="2026-01-23T18:57:49.915786899Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:49.917233 containerd[1599]: time="2026-01-23T18:57:49.917161281Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 18:57:49.918946 containerd[1599]: time="2026-01-23T18:57:49.918789866Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:49.921428 containerd[1599]: time="2026-01-23T18:57:49.921383705Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:49.921855 containerd[1599]: time="2026-01-23T18:57:49.921762638Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.439667702s" Jan 23 18:57:49.922039 containerd[1599]: time="2026-01-23T18:57:49.921975376Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:57:49.933372 containerd[1599]: time="2026-01-23T18:57:49.933200054Z" level=info msg="CreateContainer within sandbox \"7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:57:49.954686 containerd[1599]: time="2026-01-23T18:57:49.954658167Z" level=info msg="Container 474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:49.957577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840620844.mount: Deactivated successfully. Jan 23 18:57:49.963917 containerd[1599]: time="2026-01-23T18:57:49.963769977Z" level=info msg="CreateContainer within sandbox \"7bd2e0a293328c410053543952c4ad7d11697dbd6cf6d887b9381cfa2b5f4ab7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd\"" Jan 23 18:57:49.965087 containerd[1599]: time="2026-01-23T18:57:49.964464418Z" level=info msg="StartContainer for \"474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd\"" Jan 23 18:57:49.965940 containerd[1599]: time="2026-01-23T18:57:49.965619950Z" level=info msg="connecting to shim 474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd" address="unix:///run/containerd/s/5031d1d4fd2df6e87b141291642136afd27cb636c6a729412a11458984838ade" protocol=ttrpc version=3 Jan 23 18:57:49.999073 systemd[1]: Started cri-containerd-474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd.scope - libcontainer container 474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd. 
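The containerd entries above report the quay.io/tigera/operator:v1.38.7 pull finishing: 23,558,205 bytes read over 3.439667702 s for an image of reported size 25,057,686 bytes. A quick arithmetic sketch (not part of the journal) turning those logged numbers into an approximate transfer rate:

# Sketch: transfer rate implied by the "bytes read" and pull duration logged above.
bytes_read = 23_558_205
duration_s = 3.439667702
print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")  # ≈ 6.85 MB/s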
Jan 23 18:57:50.014000 audit: BPF prog-id=146 op=LOAD Jan 23 18:57:50.014000 audit: BPF prog-id=147 op=LOAD Jan 23 18:57:50.014000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=147 op=UNLOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=148 op=LOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=149 op=LOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=149 op=UNLOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=148 op=UNLOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.015000 audit: BPF prog-id=150 op=LOAD Jan 23 18:57:50.015000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2928 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:50.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437346561303334646364393562353562653733346633343664623137 Jan 23 18:57:50.044303 containerd[1599]: time="2026-01-23T18:57:50.044273637Z" level=info msg="StartContainer for \"474ea034dcd95b55be734f346db1714e2e6837e451e97f52c53726c25745eedd\" returns successfully" Jan 23 18:57:51.093728 kubelet[2813]: E0123 18:57:51.093642 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:51.103882 kubelet[2813]: I0123 18:57:51.103619 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vzlvx" podStartSLOduration=2.657731932 podStartE2EDuration="6.103607878s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="2026-01-23 18:57:46.481532114 +0000 UTC m=+6.214922276" lastFinishedPulling="2026-01-23 18:57:49.927408061 +0000 UTC m=+9.660798222" observedRunningTime="2026-01-23 18:57:50.503547693 +0000 UTC m=+10.236937854" watchObservedRunningTime="2026-01-23 18:57:51.103607878 +0000 UTC m=+10.836998039" Jan 23 18:57:51.615992 kubelet[2813]: E0123 18:57:51.615717 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:57:55.722171 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 18:57:55.722295 kernel: audit: type=1106 audit(1769194675.705:510): pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:55.705000 audit[1844]: USER_END pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:57:55.707544 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:55.732878 sshd[1843]: Connection closed by 10.0.0.1 port 53668 Jan 23 18:57:55.735017 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:55.705000 audit[1844]: CRED_DISP pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:55.749900 kernel: audit: type=1104 audit(1769194675.705:511): pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:57:55.754318 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:53668.service: Deactivated successfully. Jan 23 18:57:55.735000 audit[1839]: USER_END pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:55.760797 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:57:55.761373 systemd[1]: session-10.scope: Consumed 5.511s CPU time, 214.7M memory peak. Jan 23 18:57:55.763280 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:57:55.766872 systemd-logind[1577]: Removed session 10. Jan 23 18:57:55.735000 audit[1839]: CRED_DISP pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:55.787627 kernel: audit: type=1106 audit(1769194675.735:512): pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:55.787687 kernel: audit: type=1104 audit(1769194675.735:513): pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:57:55.788091 kernel: audit: type=1131 audit(1769194675.753:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:57:55.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.151:22-10.0.0.1:53668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:57:56.349000 audit[3233]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:56.349000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7b69a460 a2=0 a3=7ffc7b69a44c items=0 ppid=2952 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:56.375511 kernel: audit: type=1325 audit(1769194676.349:515): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:56.375573 kernel: audit: type=1300 audit(1769194676.349:515): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc7b69a460 a2=0 a3=7ffc7b69a44c items=0 ppid=2952 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:56.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:56.379000 audit[3233]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:56.391030 kernel: audit: type=1327 audit(1769194676.349:515): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:56.391081 kernel: audit: type=1325 audit(1769194676.379:516): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:56.379000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7b69a460 a2=0 a3=0 items=0 ppid=2952 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:56.406659 kernel: audit: type=1300 audit(1769194676.379:516): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7b69a460 a2=0 a3=0 items=0 ppid=2952 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:56.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:57.411000 audit[3236]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:57.411000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe0043ae00 a2=0 a3=7ffe0043adec items=0 ppid=2952 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:57.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:57.429000 audit[3236]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:57.429000 
audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0043ae00 a2=0 a3=0 items=0 ppid=2952 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:57.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:58.473000 audit[3238]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:58.473000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffff3c22a70 a2=0 a3=7ffff3c22a5c items=0 ppid=2952 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:58.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:58.483000 audit[3238]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:58.483000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffff3c22a70 a2=0 a3=0 items=0 ppid=2952 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:58.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:59.767000 audit[3240]: NETFILTER_CFG table=filter:111 family=2 entries=21 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:59.767000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3ae590c0 a2=0 a3=7fff3ae590ac items=0 ppid=2952 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:59.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:59.775000 audit[3240]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:57:59.775000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3ae590c0 a2=0 a3=0 items=0 ppid=2952 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:57:59.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:57:59.814187 systemd[1]: Created slice kubepods-besteffort-poda255e702_bae4_49a5_9196_45b8394a2e8f.slice - libcontainer container kubepods-besteffort-poda255e702_bae4_49a5_9196_45b8394a2e8f.slice. 
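The final entry above shows systemd creating the kubepods-besteffort-poda255e702_bae4_49a5_9196_45b8394a2e8f.slice cgroup for the calico-typha pod whose UID appears in the following volume records. A short illustrative sketch, assuming the kubelet's systemd cgroup-driver convention of grouping pods by QoS class and escaping '-' as '_' in the pod UID (the sketch is not from the log, but its output matches the slice name logged above):

# Sketch: derive the systemd slice name from QoS class and pod UID.
def kubepods_slice(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(kubepods_slice("besteffort", "a255e702-bae4-49a5-9196-45b8394a2e8f"))
# -> kubepods-besteffort-poda255e702_bae4_49a5_9196_45b8394a2e8f.slice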
Jan 23 18:57:59.835337 kubelet[2813]: I0123 18:57:59.835244 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a255e702-bae4-49a5-9196-45b8394a2e8f-typha-certs\") pod \"calico-typha-6dbfc7b8b7-b8bb4\" (UID: \"a255e702-bae4-49a5-9196-45b8394a2e8f\") " pod="calico-system/calico-typha-6dbfc7b8b7-b8bb4" Jan 23 18:57:59.836000 kubelet[2813]: I0123 18:57:59.835907 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbqr\" (UniqueName: \"kubernetes.io/projected/a255e702-bae4-49a5-9196-45b8394a2e8f-kube-api-access-pcbqr\") pod \"calico-typha-6dbfc7b8b7-b8bb4\" (UID: \"a255e702-bae4-49a5-9196-45b8394a2e8f\") " pod="calico-system/calico-typha-6dbfc7b8b7-b8bb4" Jan 23 18:57:59.838038 kubelet[2813]: I0123 18:57:59.835938 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a255e702-bae4-49a5-9196-45b8394a2e8f-tigera-ca-bundle\") pod \"calico-typha-6dbfc7b8b7-b8bb4\" (UID: \"a255e702-bae4-49a5-9196-45b8394a2e8f\") " pod="calico-system/calico-typha-6dbfc7b8b7-b8bb4" Jan 23 18:57:59.991056 systemd[1]: Created slice kubepods-besteffort-podc36af473_ff65_4dab_8275_c153be5f5297.slice - libcontainer container kubepods-besteffort-podc36af473_ff65_4dab_8275_c153be5f5297.slice. Jan 23 18:58:00.040464 kubelet[2813]: I0123 18:58:00.040253 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-flexvol-driver-host\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040464 kubelet[2813]: I0123 18:58:00.040320 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c36af473-ff65-4dab-8275-c153be5f5297-tigera-ca-bundle\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040464 kubelet[2813]: I0123 18:58:00.040337 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-var-run-calico\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040464 kubelet[2813]: I0123 18:58:00.040350 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w969z\" (UniqueName: \"kubernetes.io/projected/c36af473-ff65-4dab-8275-c153be5f5297-kube-api-access-w969z\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040464 kubelet[2813]: I0123 18:58:00.040365 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-cni-log-dir\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040766 kubelet[2813]: I0123 18:58:00.040380 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-var-lib-calico\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040766 kubelet[2813]: I0123 18:58:00.040393 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-xtables-lock\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040766 kubelet[2813]: I0123 18:58:00.040406 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-cni-bin-dir\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040766 kubelet[2813]: I0123 18:58:00.040420 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-policysync\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.040766 kubelet[2813]: I0123 18:58:00.040433 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-cni-net-dir\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.041009 kubelet[2813]: I0123 18:58:00.040445 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c36af473-ff65-4dab-8275-c153be5f5297-lib-modules\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.041009 kubelet[2813]: I0123 18:58:00.040459 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c36af473-ff65-4dab-8275-c153be5f5297-node-certs\") pod \"calico-node-52f5c\" (UID: \"c36af473-ff65-4dab-8275-c153be5f5297\") " pod="calico-system/calico-node-52f5c" Jan 23 18:58:00.125714 kubelet[2813]: E0123 18:58:00.125563 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:00.127080 containerd[1599]: time="2026-01-23T18:58:00.126980439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbfc7b8b7-b8bb4,Uid:a255e702-bae4-49a5-9196-45b8394a2e8f,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:00.156067 kubelet[2813]: E0123 18:58:00.153972 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.156067 kubelet[2813]: W0123 18:58:00.156033 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.156067 kubelet[2813]: E0123 18:58:00.156057 2813 plugins.go:697] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.166095 containerd[1599]: time="2026-01-23T18:58:00.165989584Z" level=info msg="connecting to shim 3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d" address="unix:///run/containerd/s/7362ed72fe559329be8f3e64cc4ff841c12551332bdcd0e23c064ca665243618" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:00.179421 kubelet[2813]: E0123 18:58:00.179402 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.179613 kubelet[2813]: W0123 18:58:00.179554 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.179613 kubelet[2813]: E0123 18:58:00.179581 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.191408 kubelet[2813]: E0123 18:58:00.191143 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:00.235973 kubelet[2813]: E0123 18:58:00.235941 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.235973 kubelet[2813]: W0123 18:58:00.235959 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.235973 kubelet[2813]: E0123 18:58:00.235975 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.236190 kubelet[2813]: E0123 18:58:00.236143 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.236190 kubelet[2813]: W0123 18:58:00.236151 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.236190 kubelet[2813]: E0123 18:58:00.236159 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.236496 systemd[1]: Started cri-containerd-3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d.scope - libcontainer container 3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d. 
Jan 23 18:58:00.237255 kubelet[2813]: E0123 18:58:00.237114 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.237255 kubelet[2813]: W0123 18:58:00.237140 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.237255 kubelet[2813]: E0123 18:58:00.237150 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.238307 kubelet[2813]: E0123 18:58:00.238046 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.238307 kubelet[2813]: W0123 18:58:00.238106 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.238307 kubelet[2813]: E0123 18:58:00.238116 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.239228 kubelet[2813]: E0123 18:58:00.239064 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.239228 kubelet[2813]: W0123 18:58:00.239121 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.239228 kubelet[2813]: E0123 18:58:00.239132 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.239747 kubelet[2813]: E0123 18:58:00.239592 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.239747 kubelet[2813]: W0123 18:58:00.239601 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.239747 kubelet[2813]: E0123 18:58:00.239610 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.240786 kubelet[2813]: E0123 18:58:00.240676 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.240786 kubelet[2813]: W0123 18:58:00.240724 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.240786 kubelet[2813]: E0123 18:58:00.240734 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.242144 kubelet[2813]: E0123 18:58:00.241969 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.242144 kubelet[2813]: W0123 18:58:00.242018 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.242144 kubelet[2813]: E0123 18:58:00.242028 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.242933 kubelet[2813]: E0123 18:58:00.242778 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.242933 kubelet[2813]: W0123 18:58:00.242925 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.242991 kubelet[2813]: E0123 18:58:00.242936 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.243530 kubelet[2813]: E0123 18:58:00.243357 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.243530 kubelet[2813]: W0123 18:58:00.243407 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.243530 kubelet[2813]: E0123 18:58:00.243416 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.244247 kubelet[2813]: E0123 18:58:00.244230 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.244247 kubelet[2813]: W0123 18:58:00.244241 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.244303 kubelet[2813]: E0123 18:58:00.244250 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.244908 kubelet[2813]: E0123 18:58:00.244746 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.244908 kubelet[2813]: W0123 18:58:00.244791 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.244908 kubelet[2813]: E0123 18:58:00.244800 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.246453 kubelet[2813]: E0123 18:58:00.246418 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.246453 kubelet[2813]: W0123 18:58:00.246431 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.246453 kubelet[2813]: E0123 18:58:00.246442 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.247596 kubelet[2813]: E0123 18:58:00.247411 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.247596 kubelet[2813]: W0123 18:58:00.247461 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.247596 kubelet[2813]: E0123 18:58:00.247472 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.248090 kubelet[2813]: E0123 18:58:00.247917 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.248090 kubelet[2813]: W0123 18:58:00.247926 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.248090 kubelet[2813]: E0123 18:58:00.247935 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248119 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.249781 kubelet[2813]: W0123 18:58:00.248127 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248136 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248388 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.249781 kubelet[2813]: W0123 18:58:00.248396 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248469 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248708 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.249781 kubelet[2813]: W0123 18:58:00.248716 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.248723 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.249781 kubelet[2813]: E0123 18:58:00.249086 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.254739 kubelet[2813]: W0123 18:58:00.249094 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.254739 kubelet[2813]: E0123 18:58:00.249102 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.254739 kubelet[2813]: E0123 18:58:00.249800 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.254739 kubelet[2813]: W0123 18:58:00.250924 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.254739 kubelet[2813]: E0123 18:58:00.250936 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.254739 kubelet[2813]: E0123 18:58:00.253721 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.254739 kubelet[2813]: W0123 18:58:00.253730 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.254739 kubelet[2813]: E0123 18:58:00.253740 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.254739 kubelet[2813]: I0123 18:58:00.253769 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b84e892a-422a-478b-8739-e473d68e3bdf-varrun\") pod \"csi-node-driver-ttvc9\" (UID: \"b84e892a-422a-478b-8739-e473d68e3bdf\") " pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:00.256967 kubelet[2813]: E0123 18:58:00.254506 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.256967 kubelet[2813]: W0123 18:58:00.254517 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.256967 kubelet[2813]: E0123 18:58:00.254527 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.256967 kubelet[2813]: I0123 18:58:00.254546 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b84e892a-422a-478b-8739-e473d68e3bdf-registration-dir\") pod \"csi-node-driver-ttvc9\" (UID: \"b84e892a-422a-478b-8739-e473d68e3bdf\") " pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:00.256967 kubelet[2813]: E0123 18:58:00.255965 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.256967 kubelet[2813]: W0123 18:58:00.256496 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.256967 kubelet[2813]: E0123 18:58:00.256510 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.257182 kubelet[2813]: I0123 18:58:00.256576 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b84e892a-422a-478b-8739-e473d68e3bdf-socket-dir\") pod \"csi-node-driver-ttvc9\" (UID: \"b84e892a-422a-478b-8739-e473d68e3bdf\") " pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:00.258172 kubelet[2813]: E0123 18:58:00.257741 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.258172 kubelet[2813]: W0123 18:58:00.258018 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.258172 kubelet[2813]: E0123 18:58:00.258030 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.259919 kubelet[2813]: E0123 18:58:00.259794 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.259994 kubelet[2813]: W0123 18:58:00.259981 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.260049 kubelet[2813]: E0123 18:58:00.260038 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.260340 kubelet[2813]: E0123 18:58:00.260329 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.260403 kubelet[2813]: W0123 18:58:00.260392 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.260455 kubelet[2813]: E0123 18:58:00.260445 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.261146 kubelet[2813]: E0123 18:58:00.261076 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.261146 kubelet[2813]: W0123 18:58:00.261088 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.261146 kubelet[2813]: E0123 18:58:00.261100 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.261943 kubelet[2813]: E0123 18:58:00.261786 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.261990 kubelet[2813]: W0123 18:58:00.261950 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.261990 kubelet[2813]: E0123 18:58:00.261962 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.263004 kubelet[2813]: E0123 18:58:00.262756 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.263004 kubelet[2813]: W0123 18:58:00.262795 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.263004 kubelet[2813]: E0123 18:58:00.262943 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.263004 kubelet[2813]: I0123 18:58:00.262967 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b84e892a-422a-478b-8739-e473d68e3bdf-kubelet-dir\") pod \"csi-node-driver-ttvc9\" (UID: \"b84e892a-422a-478b-8739-e473d68e3bdf\") " pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:00.264000 kubelet[2813]: E0123 18:58:00.263788 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.264000 kubelet[2813]: W0123 18:58:00.263911 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.264000 kubelet[2813]: E0123 18:58:00.263922 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.264599 kubelet[2813]: E0123 18:58:00.264526 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.264967 kubelet[2813]: W0123 18:58:00.264750 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.264967 kubelet[2813]: E0123 18:58:00.264910 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.265742 kubelet[2813]: E0123 18:58:00.265702 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.265742 kubelet[2813]: W0123 18:58:00.265711 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.265742 kubelet[2813]: E0123 18:58:00.265720 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.266717 kubelet[2813]: E0123 18:58:00.266567 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.266767 kubelet[2813]: W0123 18:58:00.266746 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.266767 kubelet[2813]: E0123 18:58:00.266756 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.267121 kubelet[2813]: I0123 18:58:00.266779 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shh6f\" (UniqueName: \"kubernetes.io/projected/b84e892a-422a-478b-8739-e473d68e3bdf-kube-api-access-shh6f\") pod \"csi-node-driver-ttvc9\" (UID: \"b84e892a-422a-478b-8739-e473d68e3bdf\") " pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:00.268144 kubelet[2813]: E0123 18:58:00.267953 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.268144 kubelet[2813]: W0123 18:58:00.268118 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.268144 kubelet[2813]: E0123 18:58:00.268128 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.268943 kubelet[2813]: E0123 18:58:00.268757 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.268943 kubelet[2813]: W0123 18:58:00.268769 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.268943 kubelet[2813]: E0123 18:58:00.268778 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.272000 audit: BPF prog-id=151 op=LOAD Jan 23 18:58:00.273000 audit: BPF prog-id=152 op=LOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=152 op=UNLOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=153 op=LOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=154 op=LOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=154 op=UNLOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=153 op=UNLOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.273000 audit: BPF prog-id=155 op=LOAD Jan 23 18:58:00.273000 audit[3273]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3254 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361303266376130643066666364613930333962663264356663376266 Jan 23 18:58:00.304557 kubelet[2813]: E0123 18:58:00.303988 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:00.305092 containerd[1599]: time="2026-01-23T18:58:00.305056200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52f5c,Uid:c36af473-ff65-4dab-8275-c153be5f5297,Namespace:calico-system,Attempt:0,}" Jan 23 
18:58:00.348873 containerd[1599]: time="2026-01-23T18:58:00.348487968Z" level=info msg="connecting to shim 1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd" address="unix:///run/containerd/s/601ecdff29971b092fd4f9aec9e00ffd9eaa9d2b4b00543f81ab6cdd54aff657" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:00.361963 containerd[1599]: time="2026-01-23T18:58:00.361800460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbfc7b8b7-b8bb4,Uid:a255e702-bae4-49a5-9196-45b8394a2e8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d\"" Jan 23 18:58:00.363668 kubelet[2813]: E0123 18:58:00.363562 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:00.369784 kubelet[2813]: E0123 18:58:00.368061 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.369784 kubelet[2813]: W0123 18:58:00.369577 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.369784 kubelet[2813]: E0123 18:58:00.369594 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.370059 containerd[1599]: time="2026-01-23T18:58:00.370000698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:58:00.370913 kubelet[2813]: E0123 18:58:00.370774 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.370984 kubelet[2813]: W0123 18:58:00.370918 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.370984 kubelet[2813]: E0123 18:58:00.370931 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.371948 kubelet[2813]: E0123 18:58:00.371772 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.372099 kubelet[2813]: W0123 18:58:00.372026 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.372099 kubelet[2813]: E0123 18:58:00.372071 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.374181 kubelet[2813]: E0123 18:58:00.374072 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.374181 kubelet[2813]: W0123 18:58:00.374085 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.374181 kubelet[2813]: E0123 18:58:00.374096 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.374567 kubelet[2813]: E0123 18:58:00.374539 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.374567 kubelet[2813]: W0123 18:58:00.374549 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.374567 kubelet[2813]: E0123 18:58:00.374559 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.377883 kubelet[2813]: E0123 18:58:00.376974 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.377883 kubelet[2813]: W0123 18:58:00.376986 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.377883 kubelet[2813]: E0123 18:58:00.376995 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.377883 kubelet[2813]: E0123 18:58:00.377526 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.377883 kubelet[2813]: W0123 18:58:00.377534 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.377883 kubelet[2813]: E0123 18:58:00.377543 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.378321 kubelet[2813]: E0123 18:58:00.378225 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.378321 kubelet[2813]: W0123 18:58:00.378270 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.378321 kubelet[2813]: E0123 18:58:00.378281 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.379176 kubelet[2813]: E0123 18:58:00.379093 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.379176 kubelet[2813]: W0123 18:58:00.379139 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.379176 kubelet[2813]: E0123 18:58:00.379149 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.379726 kubelet[2813]: E0123 18:58:00.379656 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.379726 kubelet[2813]: W0123 18:58:00.379697 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.379726 kubelet[2813]: E0123 18:58:00.379707 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.380265 kubelet[2813]: E0123 18:58:00.380180 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.380265 kubelet[2813]: W0123 18:58:00.380223 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.380265 kubelet[2813]: E0123 18:58:00.380232 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.381421 kubelet[2813]: E0123 18:58:00.381293 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.381421 kubelet[2813]: W0123 18:58:00.381337 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.381421 kubelet[2813]: E0123 18:58:00.381347 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.382196 kubelet[2813]: E0123 18:58:00.382110 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.382196 kubelet[2813]: W0123 18:58:00.382121 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.382196 kubelet[2813]: E0123 18:58:00.382130 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.382723 kubelet[2813]: E0123 18:58:00.382602 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.382723 kubelet[2813]: W0123 18:58:00.382665 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.382723 kubelet[2813]: E0123 18:58:00.382675 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.383407 kubelet[2813]: E0123 18:58:00.383256 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.383526 kubelet[2813]: W0123 18:58:00.383412 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.383526 kubelet[2813]: E0123 18:58:00.383423 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.385715 kubelet[2813]: E0123 18:58:00.385599 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.385715 kubelet[2813]: W0123 18:58:00.385609 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.385715 kubelet[2813]: E0123 18:58:00.385659 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.386877 kubelet[2813]: E0123 18:58:00.386704 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.387195 kubelet[2813]: W0123 18:58:00.387078 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.387243 kubelet[2813]: E0123 18:58:00.387201 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.388295 kubelet[2813]: E0123 18:58:00.388112 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.388295 kubelet[2813]: W0123 18:58:00.388276 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.388549 kubelet[2813]: E0123 18:58:00.388415 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.390342 kubelet[2813]: E0123 18:58:00.390179 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.390342 kubelet[2813]: W0123 18:58:00.390228 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.390342 kubelet[2813]: E0123 18:58:00.390238 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.391013 kubelet[2813]: E0123 18:58:00.390797 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.391013 kubelet[2813]: W0123 18:58:00.390944 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.391351 kubelet[2813]: E0123 18:58:00.391210 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.391999 kubelet[2813]: E0123 18:58:00.391943 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.392048 kubelet[2813]: W0123 18:58:00.392000 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.392048 kubelet[2813]: E0123 18:58:00.392027 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.392963 kubelet[2813]: E0123 18:58:00.392463 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.392963 kubelet[2813]: W0123 18:58:00.392477 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.392963 kubelet[2813]: E0123 18:58:00.392489 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.393306 kubelet[2813]: E0123 18:58:00.393270 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.393306 kubelet[2813]: W0123 18:58:00.393283 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.393306 kubelet[2813]: E0123 18:58:00.393293 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.394045 kubelet[2813]: E0123 18:58:00.394009 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.394045 kubelet[2813]: W0123 18:58:00.394021 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.394045 kubelet[2813]: E0123 18:58:00.394032 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.394537 kubelet[2813]: E0123 18:58:00.394525 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.394746 kubelet[2813]: W0123 18:58:00.394592 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.394746 kubelet[2813]: E0123 18:58:00.394606 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:00.406419 systemd[1]: Started cri-containerd-1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd.scope - libcontainer container 1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd. Jan 23 18:58:00.413108 kubelet[2813]: E0123 18:58:00.413092 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:00.413192 kubelet[2813]: W0123 18:58:00.413180 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:00.413243 kubelet[2813]: E0123 18:58:00.413233 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:00.423000 audit: BPF prog-id=156 op=LOAD Jan 23 18:58:00.423000 audit: BPF prog-id=157 op=LOAD Jan 23 18:58:00.423000 audit[3364]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.423000 audit: BPF prog-id=157 op=UNLOAD Jan 23 18:58:00.423000 audit[3364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.424000 audit: BPF prog-id=158 op=LOAD Jan 23 18:58:00.424000 audit[3364]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.424000 audit: BPF prog-id=159 op=LOAD Jan 23 18:58:00.424000 audit[3364]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.424000 audit: BPF prog-id=159 op=UNLOAD Jan 23 18:58:00.424000 audit[3364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.424000 audit: BPF prog-id=158 op=UNLOAD Jan 23 
18:58:00.424000 audit[3364]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.424000 audit: BPF prog-id=160 op=LOAD Jan 23 18:58:00.424000 audit[3364]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3349 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162613735333631316565366330636235326633316335623638306366 Jan 23 18:58:00.460413 containerd[1599]: time="2026-01-23T18:58:00.460193387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52f5c,Uid:c36af473-ff65-4dab-8275-c153be5f5297,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\"" Jan 23 18:58:00.461587 kubelet[2813]: E0123 18:58:00.461562 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:00.794000 audit[3414]: NETFILTER_CFG table=filter:113 family=2 entries=22 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:00.798143 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 23 18:58:00.798221 kernel: audit: type=1325 audit(1769194680.794:539): table=filter:113 family=2 entries=22 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:00.794000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff10dbefe0 a2=0 a3=7fff10dbefcc items=0 ppid=2952 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.824186 kernel: audit: type=1300 audit(1769194680.794:539): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff10dbefe0 a2=0 a3=7fff10dbefcc items=0 ppid=2952 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.824239 kernel: audit: type=1327 audit(1769194680.794:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:00.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:00.825000 audit[3414]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 
18:58:00.841137 kernel: audit: type=1325 audit(1769194680.825:540): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:00.841248 kernel: audit: type=1300 audit(1769194680.825:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff10dbefe0 a2=0 a3=0 items=0 ppid=2952 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.825000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff10dbefe0 a2=0 a3=0 items=0 ppid=2952 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:00.857957 kernel: audit: type=1327 audit(1769194680.825:540): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:00.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:01.572962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407584793.mount: Deactivated successfully. Jan 23 18:58:02.430099 kubelet[2813]: E0123 18:58:02.430002 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:03.433518 containerd[1599]: time="2026-01-23T18:58:03.433373734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:03.436952 containerd[1599]: time="2026-01-23T18:58:03.435378894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 23 18:58:03.438255 containerd[1599]: time="2026-01-23T18:58:03.438014603Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:03.452765 containerd[1599]: time="2026-01-23T18:58:03.452704426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:03.453368 containerd[1599]: time="2026-01-23T18:58:03.453272221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.083244002s" Jan 23 18:58:03.453368 containerd[1599]: time="2026-01-23T18:58:03.453362408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:58:03.454899 containerd[1599]: time="2026-01-23T18:58:03.454546621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 
18:58:03.475659 containerd[1599]: time="2026-01-23T18:58:03.475520454Z" level=info msg="CreateContainer within sandbox \"3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:58:03.486994 containerd[1599]: time="2026-01-23T18:58:03.486476675Z" level=info msg="Container 8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:03.498219 containerd[1599]: time="2026-01-23T18:58:03.498010282Z" level=info msg="CreateContainer within sandbox \"3a02f7a0d0ffcda9039bf2d5fc7bfaeead6d35f06d5489349d23ff5c2748943d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98\"" Jan 23 18:58:03.499789 containerd[1599]: time="2026-01-23T18:58:03.499691674Z" level=info msg="StartContainer for \"8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98\"" Jan 23 18:58:03.501540 containerd[1599]: time="2026-01-23T18:58:03.501253093Z" level=info msg="connecting to shim 8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98" address="unix:///run/containerd/s/7362ed72fe559329be8f3e64cc4ff841c12551332bdcd0e23c064ca665243618" protocol=ttrpc version=3 Jan 23 18:58:03.535046 systemd[1]: Started cri-containerd-8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98.scope - libcontainer container 8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98. Jan 23 18:58:03.566000 audit: BPF prog-id=161 op=LOAD Jan 23 18:58:03.568000 audit: BPF prog-id=162 op=LOAD Jan 23 18:58:03.577723 kernel: audit: type=1334 audit(1769194683.566:541): prog-id=161 op=LOAD Jan 23 18:58:03.577793 kernel: audit: type=1334 audit(1769194683.568:542): prog-id=162 op=LOAD Jan 23 18:58:03.577909 kernel: audit: type=1300 audit(1769194683.568:542): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.614999 kernel: audit: type=1327 audit(1769194683.568:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=162 op=UNLOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=163 op=LOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=164 op=LOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=164 op=UNLOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=163 op=UNLOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.568000 audit: BPF prog-id=165 op=LOAD Jan 23 18:58:03.568000 audit[3425]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3254 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:03.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383661623838343366333565623537616136386539613030333531 Jan 23 18:58:03.647160 containerd[1599]: time="2026-01-23T18:58:03.646927781Z" level=info msg="StartContainer for \"8186ab8843f35eb57aa68e9a003512572d43e47a93b9b85281bfdb299922fc98\" returns successfully" Jan 23 18:58:04.024134 containerd[1599]: time="2026-01-23T18:58:04.024037407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.025352 containerd[1599]: time="2026-01-23T18:58:04.025133147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:04.026923 containerd[1599]: time="2026-01-23T18:58:04.026753094Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.029923 containerd[1599]: time="2026-01-23T18:58:04.029774478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:04.030339 containerd[1599]: time="2026-01-23T18:58:04.030270637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 575.637696ms" Jan 23 18:58:04.030339 containerd[1599]: time="2026-01-23T18:58:04.030337020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:58:04.037139 containerd[1599]: time="2026-01-23T18:58:04.037023771Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:58:04.054200 containerd[1599]: time="2026-01-23T18:58:04.054123917Z" level=info msg="Container d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:04.067749 containerd[1599]: time="2026-01-23T18:58:04.067664536Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e\"" Jan 23 18:58:04.069352 containerd[1599]: time="2026-01-23T18:58:04.068216155Z" level=info msg="StartContainer for \"d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e\"" Jan 23 18:58:04.071068 containerd[1599]: time="2026-01-23T18:58:04.071019876Z" level=info msg="connecting to shim d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e" address="unix:///run/containerd/s/601ecdff29971b092fd4f9aec9e00ffd9eaa9d2b4b00543f81ab6cdd54aff657" protocol=ttrpc version=3 Jan 23 18:58:04.105124 systemd[1]: Started 
cri-containerd-d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e.scope - libcontainer container d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e. Jan 23 18:58:04.193000 audit: BPF prog-id=166 op=LOAD Jan 23 18:58:04.193000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3349 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:04.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432623662633631623939323864373162643938666539636238626635 Jan 23 18:58:04.193000 audit: BPF prog-id=167 op=LOAD Jan 23 18:58:04.193000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3349 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:04.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432623662633631623939323864373162643938666539636238626635 Jan 23 18:58:04.193000 audit: BPF prog-id=167 op=UNLOAD Jan 23 18:58:04.193000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:04.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432623662633631623939323864373162643938666539636238626635 Jan 23 18:58:04.193000 audit: BPF prog-id=166 op=UNLOAD Jan 23 18:58:04.193000 audit[3465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:04.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432623662633631623939323864373162643938666539636238626635 Jan 23 18:58:04.193000 audit: BPF prog-id=168 op=LOAD Jan 23 18:58:04.193000 audit[3465]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3349 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:04.193000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432623662633631623939323864373162643938666539636238626635 Jan 23 18:58:04.232929 containerd[1599]: time="2026-01-23T18:58:04.232492275Z" level=info msg="StartContainer for \"d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e\" returns successfully" Jan 23 18:58:04.251683 systemd[1]: cri-containerd-d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e.scope: Deactivated successfully. Jan 23 18:58:04.255000 audit: BPF prog-id=168 op=UNLOAD Jan 23 18:58:04.257331 containerd[1599]: time="2026-01-23T18:58:04.257130955Z" level=info msg="received container exit event container_id:\"d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e\" id:\"d2b6bc61b9928d71bd98fe9cb8bf5c60acf0afd76c9a7dc8ad477bcf1580a23e\" pid:3478 exited_at:{seconds:1769194684 nanos:256228595}" Jan 23 18:58:04.429420 kubelet[2813]: E0123 18:58:04.429256 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:04.556861 kubelet[2813]: E0123 18:58:04.556668 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:04.558285 containerd[1599]: time="2026-01-23T18:58:04.558223395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:58:04.562726 kubelet[2813]: E0123 18:58:04.562625 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:04.589342 kubelet[2813]: I0123 18:58:04.588417 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dbfc7b8b7-b8bb4" podStartSLOduration=2.499712175 podStartE2EDuration="5.587419582s" podCreationTimestamp="2026-01-23 18:57:59 +0000 UTC" firstStartedPulling="2026-01-23 18:58:00.366689631 +0000 UTC m=+20.100079793" lastFinishedPulling="2026-01-23 18:58:03.454397039 +0000 UTC m=+23.187787200" observedRunningTime="2026-01-23 18:58:04.586630521 +0000 UTC m=+24.320020682" watchObservedRunningTime="2026-01-23 18:58:04.587419582 +0000 UTC m=+24.320809743" Jan 23 18:58:05.564665 kubelet[2813]: I0123 18:58:05.564446 2813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:58:05.565422 kubelet[2813]: E0123 18:58:05.565324 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:06.430080 kubelet[2813]: E0123 18:58:06.430030 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:06.594392 containerd[1599]: time="2026-01-23T18:58:06.594193798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:06.595979 containerd[1599]: time="2026-01-23T18:58:06.595872791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 18:58:06.597116 containerd[1599]: time="2026-01-23T18:58:06.597042468Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:06.599798 containerd[1599]: time="2026-01-23T18:58:06.599721156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:06.600197 containerd[1599]: time="2026-01-23T18:58:06.600109051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.041817078s" Jan 23 18:58:06.600197 containerd[1599]: time="2026-01-23T18:58:06.600190332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:58:06.605963 containerd[1599]: time="2026-01-23T18:58:06.605932246Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:58:06.616723 containerd[1599]: time="2026-01-23T18:58:06.616663982Z" level=info msg="Container 2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:06.627356 containerd[1599]: time="2026-01-23T18:58:06.627220250Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5\"" Jan 23 18:58:06.629032 containerd[1599]: time="2026-01-23T18:58:06.628634368Z" level=info msg="StartContainer for \"2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5\"" Jan 23 18:58:06.631425 containerd[1599]: time="2026-01-23T18:58:06.631356784Z" level=info msg="connecting to shim 2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5" address="unix:///run/containerd/s/601ecdff29971b092fd4f9aec9e00ffd9eaa9d2b4b00543f81ab6cdd54aff657" protocol=ttrpc version=3 Jan 23 18:58:06.674060 systemd[1]: Started cri-containerd-2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5.scope - libcontainer container 2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5. 
Jan 23 18:58:06.755000 audit: BPF prog-id=169 op=LOAD Jan 23 18:58:06.760290 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 23 18:58:06.760358 kernel: audit: type=1334 audit(1769194686.755:555): prog-id=169 op=LOAD Jan 23 18:58:06.755000 audit[3527]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.780233 kernel: audit: type=1300 audit(1769194686.755:555): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.795740 kernel: audit: type=1327 audit(1769194686.755:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.755000 audit: BPF prog-id=170 op=LOAD Jan 23 18:58:06.755000 audit[3527]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.815923 kernel: audit: type=1334 audit(1769194686.755:556): prog-id=170 op=LOAD Jan 23 18:58:06.815976 kernel: audit: type=1300 audit(1769194686.755:556): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.816006 kernel: audit: type=1327 audit(1769194686.755:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.755000 audit: BPF prog-id=170 op=UNLOAD Jan 23 18:58:06.835385 kernel: audit: type=1334 audit(1769194686.755:557): prog-id=170 op=UNLOAD Jan 23 18:58:06.835471 kernel: audit: type=1300 audit(1769194686.755:557): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.755000 
audit[3527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.842919 containerd[1599]: time="2026-01-23T18:58:06.842764208Z" level=info msg="StartContainer for \"2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5\" returns successfully" Jan 23 18:58:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.866408 kernel: audit: type=1327 audit(1769194686.755:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.870650 kernel: audit: type=1334 audit(1769194686.755:558): prog-id=169 op=UNLOAD Jan 23 18:58:06.755000 audit: BPF prog-id=169 op=UNLOAD Jan 23 18:58:06.755000 audit[3527]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:06.755000 audit: BPF prog-id=171 op=LOAD Jan 23 18:58:06.755000 audit[3527]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3349 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262353430656131313136363766663164353536643332356466383436 Jan 23 18:58:07.549737 systemd[1]: cri-containerd-2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5.scope: Deactivated successfully. Jan 23 18:58:07.550310 systemd[1]: cri-containerd-2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5.scope: Consumed 803ms CPU time, 174.9M memory peak, 3.9M read from disk, 171.3M written to disk. 
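The audit PROCTITLE records above hex-encode the process title, with NUL bytes separating argv entries, which is why the runc invocations are unreadable in the raw journal (ausearch -i performs the same interpretation on a live system). A small Python sketch that decodes one of the values, copied verbatim from the records above; auditd truncates the title at 128 bytes, so the container ID at the end is cut short:

    # Decode an auditd PROCTITLE value: hex-encoded argv joined by NUL bytes.
    hex_title = (
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F"
        "6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E63"
        "6F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
        "6432623662633631623939323864373162643938666539636238626635"
    )
    argv = bytes.fromhex(hex_title).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> runc --root /run/containerd/runc/k8s.io --log
    #    /run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b6bc61b9928d71bd98fe9cb8bf5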
Jan 23 18:58:07.553981 containerd[1599]: time="2026-01-23T18:58:07.553778584Z" level=info msg="received container exit event container_id:\"2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5\" id:\"2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5\" pid:3540 exited_at:{seconds:1769194687 nanos:552359973}" Jan 23 18:58:07.554000 audit: BPF prog-id=171 op=UNLOAD Jan 23 18:58:07.579468 kubelet[2813]: E0123 18:58:07.579271 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:07.597206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b540ea111667ff1d556d325df8462e1618f074c145e613df3b07a401d5660a5-rootfs.mount: Deactivated successfully. Jan 23 18:58:07.620464 kubelet[2813]: I0123 18:58:07.620004 2813 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 18:58:07.696071 systemd[1]: Created slice kubepods-besteffort-pod08e0f712_602d_4383_b896_c9306db83d93.slice - libcontainer container kubepods-besteffort-pod08e0f712_602d_4383_b896_c9306db83d93.slice. Jan 23 18:58:07.699601 kubelet[2813]: E0123 18:58:07.699500 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:07.706069 systemd[1]: Created slice kubepods-besteffort-podcc04cd5b_edb0_48f3_88f2_e7b09f3ee672.slice - libcontainer container kubepods-besteffort-podcc04cd5b_edb0_48f3_88f2_e7b09f3ee672.slice. Jan 23 18:58:07.722793 systemd[1]: Created slice kubepods-burstable-podb93fb228_d6f3_4cd2_be25_e621a5f856a3.slice - libcontainer container kubepods-burstable-podb93fb228_d6f3_4cd2_be25_e621a5f856a3.slice. Jan 23 18:58:07.734993 systemd[1]: Created slice kubepods-besteffort-pod3d13162b_7811_42a8_ba7e_74957a3844c9.slice - libcontainer container kubepods-besteffort-pod3d13162b_7811_42a8_ba7e_74957a3844c9.slice. Jan 23 18:58:07.743384 systemd[1]: Created slice kubepods-besteffort-pod16aed12c_3493_44d9_ad22_ad901db963e9.slice - libcontainer container kubepods-besteffort-pod16aed12c_3493_44d9_ad22_ad901db963e9.slice. 
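The container exit event above carries the exit time as raw epoch seconds plus nanoseconds (exited_at:{seconds:1769194687 nanos:552359973}). A short Python sketch converting that value back to the wall-clock UTC time used elsewhere in the journal:

    from datetime import datetime, timezone

    # exited_at from the exit event for container 2b540ea1..., copied from the log.
    seconds, nanos = 1769194687, 552359973
    exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(exited_at.isoformat())
    # -> 2026-01-23T18:58:07.552360+00:00, matching the 18:58:07.55 journal entries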
Jan 23 18:58:07.753063 kubelet[2813]: I0123 18:58:07.752748 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhd8x\" (UniqueName: \"kubernetes.io/projected/3d13162b-7811-42a8-ba7e-74957a3844c9-kube-api-access-qhd8x\") pod \"calico-apiserver-6b479c8c46-jn62z\" (UID: \"3d13162b-7811-42a8-ba7e-74957a3844c9\") " pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" Jan 23 18:58:07.756299 kubelet[2813]: I0123 18:58:07.754618 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e0f712-602d-4383-b896-c9306db83d93-whisker-backend-key-pair\") pod \"whisker-84f9cfd68c-p6v2q\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " pod="calico-system/whisker-84f9cfd68c-p6v2q" Jan 23 18:58:07.756299 kubelet[2813]: I0123 18:58:07.755155 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47vd\" (UniqueName: \"kubernetes.io/projected/cc04cd5b-edb0-48f3-88f2-e7b09f3ee672-kube-api-access-h47vd\") pod \"calico-kube-controllers-d5fc65b5f-cz9b8\" (UID: \"cc04cd5b-edb0-48f3-88f2-e7b09f3ee672\") " pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" Jan 23 18:58:07.756299 kubelet[2813]: I0123 18:58:07.755174 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16aed12c-3493-44d9-ad22-ad901db963e9-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-lwrn2\" (UID: \"16aed12c-3493-44d9-ad22-ad901db963e9\") " pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:07.756299 kubelet[2813]: I0123 18:58:07.755188 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e954cb68-155f-4961-939e-caf1b1372055-config-volume\") pod \"coredns-66bc5c9577-j5l7h\" (UID: \"e954cb68-155f-4961-939e-caf1b1372055\") " pod="kube-system/coredns-66bc5c9577-j5l7h" Jan 23 18:58:07.756299 kubelet[2813]: I0123 18:58:07.755206 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc04cd5b-edb0-48f3-88f2-e7b09f3ee672-tigera-ca-bundle\") pod \"calico-kube-controllers-d5fc65b5f-cz9b8\" (UID: \"cc04cd5b-edb0-48f3-88f2-e7b09f3ee672\") " pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" Jan 23 18:58:07.758088 kubelet[2813]: I0123 18:58:07.755230 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6zz\" (UniqueName: \"kubernetes.io/projected/16aed12c-3493-44d9-ad22-ad901db963e9-kube-api-access-hz6zz\") pod \"goldmane-7c778bb748-lwrn2\" (UID: \"16aed12c-3493-44d9-ad22-ad901db963e9\") " pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:07.758088 kubelet[2813]: I0123 18:58:07.755246 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khplz\" (UniqueName: \"kubernetes.io/projected/e954cb68-155f-4961-939e-caf1b1372055-kube-api-access-khplz\") pod \"coredns-66bc5c9577-j5l7h\" (UID: \"e954cb68-155f-4961-939e-caf1b1372055\") " pod="kube-system/coredns-66bc5c9577-j5l7h" Jan 23 18:58:07.758088 kubelet[2813]: I0123 18:58:07.755259 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/b93fb228-d6f3-4cd2-be25-e621a5f856a3-config-volume\") pod \"coredns-66bc5c9577-c5w5n\" (UID: \"b93fb228-d6f3-4cd2-be25-e621a5f856a3\") " pod="kube-system/coredns-66bc5c9577-c5w5n" Jan 23 18:58:07.758088 kubelet[2813]: I0123 18:58:07.755273 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0f712-602d-4383-b896-c9306db83d93-whisker-ca-bundle\") pod \"whisker-84f9cfd68c-p6v2q\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " pod="calico-system/whisker-84f9cfd68c-p6v2q" Jan 23 18:58:07.758088 kubelet[2813]: I0123 18:58:07.755287 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmhs\" (UniqueName: \"kubernetes.io/projected/08e0f712-602d-4383-b896-c9306db83d93-kube-api-access-pdmhs\") pod \"whisker-84f9cfd68c-p6v2q\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " pod="calico-system/whisker-84f9cfd68c-p6v2q" Jan 23 18:58:07.758288 kubelet[2813]: I0123 18:58:07.755299 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d13162b-7811-42a8-ba7e-74957a3844c9-calico-apiserver-certs\") pod \"calico-apiserver-6b479c8c46-jn62z\" (UID: \"3d13162b-7811-42a8-ba7e-74957a3844c9\") " pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" Jan 23 18:58:07.758288 kubelet[2813]: I0123 18:58:07.755312 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aed12c-3493-44d9-ad22-ad901db963e9-config\") pod \"goldmane-7c778bb748-lwrn2\" (UID: \"16aed12c-3493-44d9-ad22-ad901db963e9\") " pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:07.758288 kubelet[2813]: I0123 18:58:07.755324 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lnxl\" (UniqueName: \"kubernetes.io/projected/b93fb228-d6f3-4cd2-be25-e621a5f856a3-kube-api-access-4lnxl\") pod \"coredns-66bc5c9577-c5w5n\" (UID: \"b93fb228-d6f3-4cd2-be25-e621a5f856a3\") " pod="kube-system/coredns-66bc5c9577-c5w5n" Jan 23 18:58:07.758288 kubelet[2813]: I0123 18:58:07.755347 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/16aed12c-3493-44d9-ad22-ad901db963e9-goldmane-key-pair\") pod \"goldmane-7c778bb748-lwrn2\" (UID: \"16aed12c-3493-44d9-ad22-ad901db963e9\") " pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:07.761193 systemd[1]: Created slice kubepods-burstable-pode954cb68_155f_4961_939e_caf1b1372055.slice - libcontainer container kubepods-burstable-pode954cb68_155f_4961_939e_caf1b1372055.slice. 
Jan 23 18:58:07.762000 audit[3573]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:07.762000 audit[3573]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe91665670 a2=0 a3=7ffe9166565c items=0 ppid=2952 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:07.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:07.768000 audit[3573]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=3573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:07.768000 audit[3573]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe91665670 a2=0 a3=7ffe9166565c items=0 ppid=2952 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:07.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:07.776440 systemd[1]: Created slice kubepods-besteffort-poda83d5fe0_7502_4b47_a69c_d746b0e6550b.slice - libcontainer container kubepods-besteffort-poda83d5fe0_7502_4b47_a69c_d746b0e6550b.slice. Jan 23 18:58:07.857746 kubelet[2813]: I0123 18:58:07.855995 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qldk\" (UniqueName: \"kubernetes.io/projected/a83d5fe0-7502-4b47-a69c-d746b0e6550b-kube-api-access-2qldk\") pod \"calico-apiserver-6b479c8c46-2gfjg\" (UID: \"a83d5fe0-7502-4b47-a69c-d746b0e6550b\") " pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" Jan 23 18:58:07.857746 kubelet[2813]: I0123 18:58:07.856660 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a83d5fe0-7502-4b47-a69c-d746b0e6550b-calico-apiserver-certs\") pod \"calico-apiserver-6b479c8c46-2gfjg\" (UID: \"a83d5fe0-7502-4b47-a69c-d746b0e6550b\") " pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" Jan 23 18:58:08.005889 containerd[1599]: time="2026-01-23T18:58:08.005676609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f9cfd68c-p6v2q,Uid:08e0f712-602d-4383-b896-c9306db83d93,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:08.021770 containerd[1599]: time="2026-01-23T18:58:08.021699194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5fc65b5f-cz9b8,Uid:cc04cd5b-edb0-48f3-88f2-e7b09f3ee672,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:08.033089 kubelet[2813]: E0123 18:58:08.032969 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:08.036414 containerd[1599]: time="2026-01-23T18:58:08.036213820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c5w5n,Uid:b93fb228-d6f3-4cd2-be25-e621a5f856a3,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:08.046194 containerd[1599]: time="2026-01-23T18:58:08.045785690Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-jn62z,Uid:3d13162b-7811-42a8-ba7e-74957a3844c9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:08.073344 kubelet[2813]: E0123 18:58:08.073135 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:08.080756 containerd[1599]: time="2026-01-23T18:58:08.080128319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j5l7h,Uid:e954cb68-155f-4961-939e-caf1b1372055,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:08.080756 containerd[1599]: time="2026-01-23T18:58:08.080458492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lwrn2,Uid:16aed12c-3493-44d9-ad22-ad901db963e9,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:08.101155 containerd[1599]: time="2026-01-23T18:58:08.101018346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-2gfjg,Uid:a83d5fe0-7502-4b47-a69c-d746b0e6550b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:08.230677 containerd[1599]: time="2026-01-23T18:58:08.230060950Z" level=error msg="Failed to destroy network for sandbox \"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.238181 containerd[1599]: time="2026-01-23T18:58:08.238152287Z" level=error msg="Failed to destroy network for sandbox \"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.239105 containerd[1599]: time="2026-01-23T18:58:08.238993427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c5w5n,Uid:b93fb228-d6f3-4cd2-be25-e621a5f856a3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.239920 kubelet[2813]: E0123 18:58:08.239697 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.239920 kubelet[2813]: E0123 18:58:08.239777 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c5w5n" Jan 23 18:58:08.240337 kubelet[2813]: E0123 18:58:08.240122 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c5w5n" Jan 23 18:58:08.240445 kubelet[2813]: E0123 18:58:08.240421 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-c5w5n_kube-system(b93fb228-d6f3-4cd2-be25-e621a5f856a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-c5w5n_kube-system(b93fb228-d6f3-4cd2-be25-e621a5f856a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c141bd0526e95cf26dbc7260188b872119a836b73d41cb2bd016a5914e58da21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-c5w5n" podUID="b93fb228-d6f3-4cd2-be25-e621a5f856a3" Jan 23 18:58:08.249448 containerd[1599]: time="2026-01-23T18:58:08.249183863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5fc65b5f-cz9b8,Uid:cc04cd5b-edb0-48f3-88f2-e7b09f3ee672,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.250723 kubelet[2813]: E0123 18:58:08.250266 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.250723 kubelet[2813]: E0123 18:58:08.250304 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" Jan 23 18:58:08.250723 kubelet[2813]: E0123 18:58:08.250321 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" Jan 23 18:58:08.250942 kubelet[2813]: E0123 18:58:08.250359 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d5fc65b5f-cz9b8_calico-system(cc04cd5b-edb0-48f3-88f2-e7b09f3ee672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-d5fc65b5f-cz9b8_calico-system(cc04cd5b-edb0-48f3-88f2-e7b09f3ee672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aa0d5faa100fa05abda163c8a88774e47d49cbae24873bb04305184c17dbf3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:08.266456 containerd[1599]: time="2026-01-23T18:58:08.266346871Z" level=error msg="Failed to destroy network for sandbox \"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.272710 containerd[1599]: time="2026-01-23T18:58:08.271938052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f9cfd68c-p6v2q,Uid:08e0f712-602d-4383-b896-c9306db83d93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.273313 kubelet[2813]: E0123 18:58:08.272276 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.273313 kubelet[2813]: E0123 18:58:08.272320 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84f9cfd68c-p6v2q" Jan 23 18:58:08.273313 kubelet[2813]: E0123 18:58:08.272338 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84f9cfd68c-p6v2q" Jan 23 18:58:08.273761 kubelet[2813]: E0123 18:58:08.272381 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84f9cfd68c-p6v2q_calico-system(08e0f712-602d-4383-b896-c9306db83d93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84f9cfd68c-p6v2q_calico-system(08e0f712-602d-4383-b896-c9306db83d93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ec6cb1f6353f8ef84fc105e70b9a9f705da047663d581a9631677164418723\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84f9cfd68c-p6v2q" podUID="08e0f712-602d-4383-b896-c9306db83d93" Jan 23 18:58:08.296354 containerd[1599]: time="2026-01-23T18:58:08.296214772Z" level=error msg="Failed to destroy network for sandbox \"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.303003 containerd[1599]: time="2026-01-23T18:58:08.302732473Z" level=error msg="Failed to destroy network for sandbox \"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.306763 containerd[1599]: time="2026-01-23T18:58:08.306585637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-2gfjg,Uid:a83d5fe0-7502-4b47-a69c-d746b0e6550b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.307123 kubelet[2813]: E0123 18:58:08.306993 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.307169 kubelet[2813]: E0123 18:58:08.307135 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" Jan 23 18:58:08.307169 kubelet[2813]: E0123 18:58:08.307157 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" Jan 23 18:58:08.307224 kubelet[2813]: E0123 18:58:08.307203 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b479c8c46-2gfjg_calico-apiserver(a83d5fe0-7502-4b47-a69c-d746b0e6550b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b479c8c46-2gfjg_calico-apiserver(a83d5fe0-7502-4b47-a69c-d746b0e6550b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"527bfdf14c75eea6ce1047b0b05bcea92e1316903ad9af028e03b77c13403d50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:08.308514 containerd[1599]: time="2026-01-23T18:58:08.308428927Z" level=error msg="Failed to destroy network for sandbox \"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.310760 containerd[1599]: time="2026-01-23T18:58:08.310686480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lwrn2,Uid:16aed12c-3493-44d9-ad22-ad901db963e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.311979 kubelet[2813]: E0123 18:58:08.311924 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.312344 kubelet[2813]: E0123 18:58:08.312003 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:08.312344 kubelet[2813]: E0123 18:58:08.312141 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lwrn2" Jan 23 18:58:08.312407 kubelet[2813]: E0123 18:58:08.312389 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-lwrn2_calico-system(16aed12c-3493-44d9-ad22-ad901db963e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-lwrn2_calico-system(16aed12c-3493-44d9-ad22-ad901db963e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc40663080fa893f34970b19009098b9f53ac8fc6c93e02a7f31903872a3c7e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lwrn2" 
podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:08.316633 containerd[1599]: time="2026-01-23T18:58:08.316593850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j5l7h,Uid:e954cb68-155f-4961-939e-caf1b1372055,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.317003 kubelet[2813]: E0123 18:58:08.316935 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.317045 kubelet[2813]: E0123 18:58:08.317028 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j5l7h" Jan 23 18:58:08.317068 kubelet[2813]: E0123 18:58:08.317051 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-j5l7h" Jan 23 18:58:08.317196 kubelet[2813]: E0123 18:58:08.317140 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-j5l7h_kube-system(e954cb68-155f-4961-939e-caf1b1372055)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-j5l7h_kube-system(e954cb68-155f-4961-939e-caf1b1372055)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9b4b439c6d9ce26cf792ae9bd5c890aa25afa8349c7c7306f3301b6260a3748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-j5l7h" podUID="e954cb68-155f-4961-939e-caf1b1372055" Jan 23 18:58:08.326437 containerd[1599]: time="2026-01-23T18:58:08.326339329Z" level=error msg="Failed to destroy network for sandbox \"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.330225 containerd[1599]: time="2026-01-23T18:58:08.330198188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-jn62z,Uid:3d13162b-7811-42a8-ba7e-74957a3844c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.330949 kubelet[2813]: E0123 18:58:08.330799 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.331022 kubelet[2813]: E0123 18:58:08.330955 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" Jan 23 18:58:08.331022 kubelet[2813]: E0123 18:58:08.330974 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" Jan 23 18:58:08.331205 kubelet[2813]: E0123 18:58:08.331107 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b479c8c46-jn62z_calico-apiserver(3d13162b-7811-42a8-ba7e-74957a3844c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b479c8c46-jn62z_calico-apiserver(3d13162b-7811-42a8-ba7e-74957a3844c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c2881ef78b280f9a0f1c3f976cf5ff8027b0016023daefe29b70c6858a231ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:08.441585 systemd[1]: Created slice kubepods-besteffort-podb84e892a_422a_478b_8739_e473d68e3bdf.slice - libcontainer container kubepods-besteffort-podb84e892a_422a_478b_8739_e473d68e3bdf.slice. 
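Every RunPodSandbox failure in the block above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, which calico-node only writes once it is up, so sandbox creation for these pods cannot succeed yet. The error text itself points at the check; a minimal Python sketch of that preflight check on the node:

    import os

    # The sandbox failures above all stat this file; calico-node creates it
    # once it is running with /var/lib/calico mounted.
    nodename_file = "/var/lib/calico/nodename"
    if os.path.exists(nodename_file):
        with open(nodename_file) as f:
            print("calico nodename:", f.read().strip())
    else:
        print(nodename_file, "missing: calico-node has not initialized the CNI yet")

In this journal the condition clears itself: the calico-node container is pulled and started a few entries later (18:58:16), after which no further nodename errors appear in this section.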
Jan 23 18:58:08.447478 containerd[1599]: time="2026-01-23T18:58:08.447345194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ttvc9,Uid:b84e892a-422a-478b-8739-e473d68e3bdf,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:08.559492 containerd[1599]: time="2026-01-23T18:58:08.559372417Z" level=error msg="Failed to destroy network for sandbox \"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.564862 containerd[1599]: time="2026-01-23T18:58:08.564653054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ttvc9,Uid:b84e892a-422a-478b-8739-e473d68e3bdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.565307 kubelet[2813]: E0123 18:58:08.565231 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:08.565365 kubelet[2813]: E0123 18:58:08.565332 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:08.565365 kubelet[2813]: E0123 18:58:08.565351 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ttvc9" Jan 23 18:58:08.565426 kubelet[2813]: E0123 18:58:08.565394 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7be6b835a794a2937fca90cd6ee90f107ede2bbab2972912cffb376aeffc224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:08.588737 kubelet[2813]: E0123 18:58:08.588346 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:08.591160 containerd[1599]: time="2026-01-23T18:58:08.590778049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:58:16.451106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849983342.mount: Deactivated successfully. Jan 23 18:58:16.496617 containerd[1599]: time="2026-01-23T18:58:16.496454762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:16.498063 containerd[1599]: time="2026-01-23T18:58:16.497990489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 23 18:58:16.499694 containerd[1599]: time="2026-01-23T18:58:16.499589620Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:16.502006 containerd[1599]: time="2026-01-23T18:58:16.501931204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:16.502784 containerd[1599]: time="2026-01-23T18:58:16.502677321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.911334334s" Jan 23 18:58:16.502784 containerd[1599]: time="2026-01-23T18:58:16.502751600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:58:16.526137 containerd[1599]: time="2026-01-23T18:58:16.525677286Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:58:16.629215 containerd[1599]: time="2026-01-23T18:58:16.628944257Z" level=info msg="Container 4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:16.656438 containerd[1599]: time="2026-01-23T18:58:16.656323208Z" level=info msg="CreateContainer within sandbox \"1ba753611ee6c0cb52f31c5b680cf02f83199d147ff4dc718c39d23b2db00efd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864\"" Jan 23 18:58:16.657974 containerd[1599]: time="2026-01-23T18:58:16.657797753Z" level=info msg="StartContainer for \"4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864\"" Jan 23 18:58:16.659653 containerd[1599]: time="2026-01-23T18:58:16.659617549Z" level=info msg="connecting to shim 4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864" address="unix:///run/containerd/s/601ecdff29971b092fd4f9aec9e00ffd9eaa9d2b4b00543f81ab6cdd54aff657" protocol=ttrpc version=3 Jan 23 18:58:16.704103 systemd[1]: Started cri-containerd-4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864.scope - libcontainer container 4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864. 
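The kubelet dns.go warning that recurs throughout this section ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the resolv.conf kubelet reads lists more than the resolver limit of three nameservers, so only the first three are passed into pod sandboxes. A quick Python sketch of checking that on the node; /etc/resolv.conf is the conventional default, but kubelet's --resolv-conf flag may point elsewhere (for example systemd-resolved's run file):

    MAX_NAMESERVERS = 3  # resolver limit kubelet enforces when building pod resolv.conf

    with open("/etc/resolv.conf") as f:
        nameservers = [line.split()[1] for line in f
                       if line.strip().startswith("nameserver") and len(line.split()) > 1]

    print("configured:", nameservers)
    if len(nameservers) > MAX_NAMESERVERS:
        print("kubelet will apply only:", nameservers[:MAX_NAMESERVERS])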
Jan 23 18:58:16.777000 audit: BPF prog-id=172 op=LOAD Jan 23 18:58:16.782557 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 23 18:58:16.782673 kernel: audit: type=1334 audit(1769194696.777:563): prog-id=172 op=LOAD Jan 23 18:58:16.787009 kernel: audit: type=1300 audit(1769194696.777:563): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.777000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.827791 kernel: audit: type=1327 audit(1769194696.777:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.828330 kernel: audit: type=1334 audit(1769194696.777:564): prog-id=173 op=LOAD Jan 23 18:58:16.777000 audit: BPF prog-id=173 op=LOAD Jan 23 18:58:16.833137 kernel: audit: type=1300 audit(1769194696.777:564): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.777000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.872013 containerd[1599]: time="2026-01-23T18:58:16.871982794Z" level=info msg="StartContainer for \"4eebd53f4111df3d5cebb1de07f7bbbfa1893f4d70386405f8e6ace0c0f02864\" returns successfully" Jan 23 18:58:16.875600 kernel: audit: type=1327 audit(1769194696.777:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.875672 kernel: audit: type=1334 audit(1769194696.778:565): prog-id=173 op=UNLOAD Jan 23 18:58:16.778000 audit: BPF prog-id=173 op=UNLOAD Jan 23 18:58:16.880410 kernel: audit: type=1300 audit(1769194696.778:565): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3849 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.778000 audit[3849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.778000 audit: BPF prog-id=172 op=UNLOAD Jan 23 18:58:16.918061 kernel: audit: type=1327 audit(1769194696.778:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.918133 kernel: audit: type=1334 audit(1769194696.778:566): prog-id=172 op=UNLOAD Jan 23 18:58:16.778000 audit[3849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:16.778000 audit: BPF prog-id=174 op=LOAD Jan 23 18:58:16.778000 audit[3849]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3349 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:16.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656264353366343131316466336435636562623164653037663762 Jan 23 18:58:17.041336 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:58:17.041433 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 18:58:17.249877 kubelet[2813]: I0123 18:58:17.249180 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e0f712-602d-4383-b896-c9306db83d93-whisker-backend-key-pair\") pod \"08e0f712-602d-4383-b896-c9306db83d93\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " Jan 23 18:58:17.249877 kubelet[2813]: I0123 18:58:17.249224 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0f712-602d-4383-b896-c9306db83d93-whisker-ca-bundle\") pod \"08e0f712-602d-4383-b896-c9306db83d93\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " Jan 23 18:58:17.249877 kubelet[2813]: I0123 18:58:17.249394 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmhs\" (UniqueName: \"kubernetes.io/projected/08e0f712-602d-4383-b896-c9306db83d93-kube-api-access-pdmhs\") pod \"08e0f712-602d-4383-b896-c9306db83d93\" (UID: \"08e0f712-602d-4383-b896-c9306db83d93\") " Jan 23 18:58:17.249877 kubelet[2813]: I0123 18:58:17.249705 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08e0f712-602d-4383-b896-c9306db83d93-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "08e0f712-602d-4383-b896-c9306db83d93" (UID: "08e0f712-602d-4383-b896-c9306db83d93"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:58:17.255466 kubelet[2813]: I0123 18:58:17.255434 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e0f712-602d-4383-b896-c9306db83d93-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "08e0f712-602d-4383-b896-c9306db83d93" (UID: "08e0f712-602d-4383-b896-c9306db83d93"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:58:17.257225 kubelet[2813]: I0123 18:58:17.257142 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e0f712-602d-4383-b896-c9306db83d93-kube-api-access-pdmhs" (OuterVolumeSpecName: "kube-api-access-pdmhs") pod "08e0f712-602d-4383-b896-c9306db83d93" (UID: "08e0f712-602d-4383-b896-c9306db83d93"). InnerVolumeSpecName "kube-api-access-pdmhs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:58:17.351125 kubelet[2813]: I0123 18:58:17.350959 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08e0f712-602d-4383-b896-c9306db83d93-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 18:58:17.351125 kubelet[2813]: I0123 18:58:17.350992 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0f712-602d-4383-b896-c9306db83d93-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 18:58:17.351125 kubelet[2813]: I0123 18:58:17.351001 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdmhs\" (UniqueName: \"kubernetes.io/projected/08e0f712-602d-4383-b896-c9306db83d93-kube-api-access-pdmhs\") on node \"localhost\" DevicePath \"\"" Jan 23 18:58:17.451903 systemd[1]: var-lib-kubelet-pods-08e0f712\x2d602d\x2d4383\x2db896\x2dc9306db83d93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpdmhs.mount: Deactivated successfully. 
Jan 23 18:58:17.452097 systemd[1]: var-lib-kubelet-pods-08e0f712\x2d602d\x2d4383\x2db896\x2dc9306db83d93-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:58:17.628372 kubelet[2813]: E0123 18:58:17.628157 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:17.638133 systemd[1]: Removed slice kubepods-besteffort-pod08e0f712_602d_4383_b896_c9306db83d93.slice - libcontainer container kubepods-besteffort-pod08e0f712_602d_4383_b896_c9306db83d93.slice. Jan 23 18:58:17.649894 kubelet[2813]: I0123 18:58:17.649657 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-52f5c" podStartSLOduration=2.607614899 podStartE2EDuration="18.649643337s" podCreationTimestamp="2026-01-23 18:57:59 +0000 UTC" firstStartedPulling="2026-01-23 18:58:00.462407745 +0000 UTC m=+20.195797906" lastFinishedPulling="2026-01-23 18:58:16.504436183 +0000 UTC m=+36.237826344" observedRunningTime="2026-01-23 18:58:17.647207687 +0000 UTC m=+37.380597848" watchObservedRunningTime="2026-01-23 18:58:17.649643337 +0000 UTC m=+37.383033498" Jan 23 18:58:17.717905 systemd[1]: Created slice kubepods-besteffort-pod53b6d9e4_9482_4a0d_8ab8_6e548b15413b.slice - libcontainer container kubepods-besteffort-pod53b6d9e4_9482_4a0d_8ab8_6e548b15413b.slice. Jan 23 18:58:17.754236 kubelet[2813]: I0123 18:58:17.754108 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767gd\" (UniqueName: \"kubernetes.io/projected/53b6d9e4-9482-4a0d-8ab8-6e548b15413b-kube-api-access-767gd\") pod \"whisker-5597745fdc-nd9dw\" (UID: \"53b6d9e4-9482-4a0d-8ab8-6e548b15413b\") " pod="calico-system/whisker-5597745fdc-nd9dw" Jan 23 18:58:17.754236 kubelet[2813]: I0123 18:58:17.754215 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53b6d9e4-9482-4a0d-8ab8-6e548b15413b-whisker-ca-bundle\") pod \"whisker-5597745fdc-nd9dw\" (UID: \"53b6d9e4-9482-4a0d-8ab8-6e548b15413b\") " pod="calico-system/whisker-5597745fdc-nd9dw" Jan 23 18:58:17.754393 kubelet[2813]: I0123 18:58:17.754258 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53b6d9e4-9482-4a0d-8ab8-6e548b15413b-whisker-backend-key-pair\") pod \"whisker-5597745fdc-nd9dw\" (UID: \"53b6d9e4-9482-4a0d-8ab8-6e548b15413b\") " pod="calico-system/whisker-5597745fdc-nd9dw" Jan 23 18:58:18.029783 containerd[1599]: time="2026-01-23T18:58:18.029601117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5597745fdc-nd9dw,Uid:53b6d9e4-9482-4a0d-8ab8-6e548b15413b,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:18.273461 systemd-networkd[1510]: calied633e158f2: Link UP Jan 23 18:58:18.274315 systemd-networkd[1510]: calied633e158f2: Gained carrier Jan 23 18:58:18.292425 containerd[1599]: 2026-01-23 18:58:18.067 [INFO][3945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:18.292425 containerd[1599]: 2026-01-23 18:58:18.096 [INFO][3945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5597745fdc--nd9dw-eth0 whisker-5597745fdc- calico-system 53b6d9e4-9482-4a0d-8ab8-6e548b15413b 909 0 
2026-01-23 18:58:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5597745fdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5597745fdc-nd9dw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calied633e158f2 [] [] }} ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-" Jan 23 18:58:18.292425 containerd[1599]: 2026-01-23 18:58:18.096 [INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.292425 containerd[1599]: 2026-01-23 18:58:18.208 [INFO][3958] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" HandleID="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Workload="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.209 [INFO][3958] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" HandleID="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Workload="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001342a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5597745fdc-nd9dw", "timestamp":"2026-01-23 18:58:18.208280169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.209 [INFO][3958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.209 [INFO][3958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.209 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.219 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" host="localhost" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.227 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.234 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.236 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.239 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:18.292987 containerd[1599]: 2026-01-23 18:58:18.239 [INFO][3958] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" host="localhost" Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.242 [INFO][3958] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.249 [INFO][3958] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" host="localhost" Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.255 [INFO][3958] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" host="localhost" Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.255 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" host="localhost" Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.255 [INFO][3958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:18.293434 containerd[1599]: 2026-01-23 18:58:18.255 [INFO][3958] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" HandleID="k8s-pod-network.2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Workload="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.293775 containerd[1599]: 2026-01-23 18:58:18.259 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5597745fdc--nd9dw-eth0", GenerateName:"whisker-5597745fdc-", Namespace:"calico-system", SelfLink:"", UID:"53b6d9e4-9482-4a0d-8ab8-6e548b15413b", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5597745fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5597745fdc-nd9dw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied633e158f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:18.293775 containerd[1599]: 2026-01-23 18:58:18.259 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.294061 containerd[1599]: 2026-01-23 18:58:18.259 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied633e158f2 ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.294061 containerd[1599]: 2026-01-23 18:58:18.274 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.294150 containerd[1599]: 2026-01-23 18:58:18.276 [INFO][3945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5597745fdc--nd9dw-eth0", GenerateName:"whisker-5597745fdc-", Namespace:"calico-system", SelfLink:"", UID:"53b6d9e4-9482-4a0d-8ab8-6e548b15413b", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5597745fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf", Pod:"whisker-5597745fdc-nd9dw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calied633e158f2", MAC:"46:0f:b1:70:3d:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:18.294278 containerd[1599]: 2026-01-23 18:58:18.287 [INFO][3945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" Namespace="calico-system" Pod="whisker-5597745fdc-nd9dw" WorkloadEndpoint="localhost-k8s-whisker--5597745fdc--nd9dw-eth0" Jan 23 18:58:18.396773 containerd[1599]: time="2026-01-23T18:58:18.396600741Z" level=info msg="connecting to shim 2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf" address="unix:///run/containerd/s/076517dd8b1bebee13bcfaabb9dfc19146cb5551747e3cd108f19707f7caf979" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:18.433236 kubelet[2813]: I0123 18:58:18.433104 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e0f712-602d-4383-b896-c9306db83d93" path="/var/lib/kubelet/pods/08e0f712-602d-4383-b896-c9306db83d93/volumes" Jan 23 18:58:18.442053 systemd[1]: Started cri-containerd-2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf.scope - libcontainer container 2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf. 
Jan 23 18:58:18.454000 audit: BPF prog-id=175 op=LOAD Jan 23 18:58:18.455000 audit: BPF prog-id=176 op=LOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=176 op=UNLOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=177 op=LOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=178 op=LOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=178 op=UNLOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=177 op=UNLOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.455000 audit: BPF prog-id=179 op=LOAD Jan 23 18:58:18.455000 audit[3994]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3982 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264313936326664336562636139393866353931613738363532353164 Jan 23 18:58:18.457900 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:18.542235 containerd[1599]: time="2026-01-23T18:58:18.541692238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5597745fdc-nd9dw,Uid:53b6d9e4-9482-4a0d-8ab8-6e548b15413b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d1962fd3ebca998f591a7865251d342480a05e6ed1c7012711016795e9532cf\"" Jan 23 18:58:18.546577 containerd[1599]: time="2026-01-23T18:58:18.546060085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:18.636183 kubelet[2813]: E0123 18:58:18.636045 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:18.668097 containerd[1599]: time="2026-01-23T18:58:18.668022573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:18.669765 containerd[1599]: time="2026-01-23T18:58:18.669693242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:18.669980 containerd[1599]: time="2026-01-23T18:58:18.669898605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:18.670090 kubelet[2813]: E0123 18:58:18.670057 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:18.670338 kubelet[2813]: E0123 18:58:18.670237 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:18.672953 kubelet[2813]: E0123 
18:58:18.672159 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:18.676535 containerd[1599]: time="2026-01-23T18:58:18.676433535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:18.742725 containerd[1599]: time="2026-01-23T18:58:18.742687129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:18.745321 containerd[1599]: time="2026-01-23T18:58:18.745218169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:18.745321 containerd[1599]: time="2026-01-23T18:58:18.745295212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:18.745731 kubelet[2813]: E0123 18:58:18.745700 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:18.746030 kubelet[2813]: E0123 18:58:18.746012 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:18.746747 kubelet[2813]: E0123 18:58:18.746660 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:18.746747 kubelet[2813]: E0123 18:58:18.746702 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:58:18.861000 audit: BPF prog-id=180 op=LOAD Jan 23 18:58:18.861000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd76ff14c0 a2=98 a3=1fffffffffffffff items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.861000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.861000 audit: BPF prog-id=180 op=UNLOAD Jan 23 18:58:18.861000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd76ff1490 a3=0 items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.861000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.861000 audit: BPF prog-id=181 op=LOAD Jan 23 18:58:18.861000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd76ff13a0 a2=94 a3=3 items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.861000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.861000 audit: BPF prog-id=181 op=UNLOAD Jan 23 18:58:18.861000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd76ff13a0 a2=94 a3=3 items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.861000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.861000 audit: BPF prog-id=182 op=LOAD Jan 23 18:58:18.861000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd76ff13e0 a2=94 a3=7ffd76ff15c0 items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.861000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.862000 audit: BPF prog-id=182 op=UNLOAD Jan 23 18:58:18.862000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd76ff13e0 a2=94 a3=7ffd76ff15c0 items=0 ppid=4028 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.862000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:58:18.865000 audit: BPF prog-id=183 op=LOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff55b6df20 a2=98 a3=3 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:18.865000 audit: BPF prog-id=183 op=UNLOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff55b6def0 a3=0 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:18.865000 audit: BPF prog-id=184 op=LOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff55b6dd10 a2=94 a3=54428f items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:18.865000 audit: BPF prog-id=184 op=UNLOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff55b6dd10 a2=94 a3=54428f items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:18.865000 audit: BPF prog-id=185 op=LOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff55b6dd40 a2=94 a3=2 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:18.865000 audit: BPF prog-id=185 op=UNLOAD Jan 23 18:58:18.865000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff55b6dd40 a2=0 a3=2 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:18.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.042000 audit: BPF prog-id=186 op=LOAD Jan 23 18:58:19.042000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff55b6dc00 a2=94 a3=1 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.042000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.042000 audit: BPF prog-id=186 op=UNLOAD Jan 23 18:58:19.042000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff55b6dc00 a2=94 a3=1 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.042000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=187 op=LOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff55b6dbf0 a2=94 a3=4 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=187 op=UNLOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff55b6dbf0 a2=0 a3=4 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=188 op=LOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff55b6da50 a2=94 a3=5 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=188 op=UNLOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff55b6da50 a2=0 a3=5 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=189 op=LOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff55b6dc70 a2=94 a3=6 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.052000 audit: BPF prog-id=189 op=UNLOAD Jan 23 18:58:19.052000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff55b6dc70 a2=0 a3=6 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.052000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 
18:58:19.053000 audit: BPF prog-id=190 op=LOAD Jan 23 18:58:19.053000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff55b6d420 a2=94 a3=88 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.053000 audit: BPF prog-id=191 op=LOAD Jan 23 18:58:19.053000 audit[4177]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff55b6d2a0 a2=94 a3=2 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.053000 audit: BPF prog-id=191 op=UNLOAD Jan 23 18:58:19.053000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff55b6d2d0 a2=0 a3=7fff55b6d3d0 items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.053000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.054000 audit: BPF prog-id=190 op=UNLOAD Jan 23 18:58:19.054000 audit[4177]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3d57bd10 a2=0 a3=f33b372ca4e2485e items=0 ppid=4028 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.054000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:58:19.067000 audit: BPF prog-id=192 op=LOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebc723ad0 a2=98 a3=1999999999999999 items=0 ppid=4028 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.067000 audit: BPF prog-id=192 op=UNLOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebc723aa0 a3=0 items=0 ppid=4028 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.067000 audit: BPF prog-id=193 op=LOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebc7239b0 a2=94 a3=ffff items=0 ppid=4028 pid=4180 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.067000 audit: BPF prog-id=193 op=UNLOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebc7239b0 a2=94 a3=ffff items=0 ppid=4028 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.067000 audit: BPF prog-id=194 op=LOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebc7239f0 a2=94 a3=7ffebc723bd0 items=0 ppid=4028 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.067000 audit: BPF prog-id=194 op=UNLOAD Jan 23 18:58:19.067000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffebc7239f0 a2=94 a3=7ffebc723bd0 items=0 ppid=4028 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.067000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:58:19.157151 systemd-networkd[1510]: vxlan.calico: Link UP Jan 23 18:58:19.157160 systemd-networkd[1510]: vxlan.calico: Gained carrier Jan 23 18:58:19.191000 audit: BPF prog-id=195 op=LOAD Jan 23 18:58:19.191000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff06a86090 a2=98 a3=0 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.191000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=195 op=UNLOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff06a86060 a3=0 items=0 ppid=4028 
pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=196 op=LOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff06a85ea0 a2=94 a3=54428f items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=196 op=UNLOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff06a85ea0 a2=94 a3=54428f items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=197 op=LOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff06a85ed0 a2=94 a3=2 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=197 op=UNLOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff06a85ed0 a2=0 a3=2 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=198 op=LOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff06a85c80 a2=94 a3=4 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=198 op=UNLOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff06a85c80 a2=94 a3=4 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=199 op=LOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff06a85d80 a2=94 a3=7fff06a85f00 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.192000 audit: BPF prog-id=199 op=UNLOAD Jan 23 18:58:19.192000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff06a85d80 a2=0 a3=7fff06a85f00 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.193000 audit: BPF prog-id=200 op=LOAD Jan 23 18:58:19.193000 audit[4207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff06a854b0 a2=94 a3=2 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.193000 audit: BPF prog-id=200 op=UNLOAD Jan 23 18:58:19.193000 audit[4207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff06a854b0 a2=0 a3=2 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.193000 audit: BPF prog-id=201 op=LOAD Jan 23 18:58:19.193000 audit[4207]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff06a855b0 a2=94 a3=30 items=0 ppid=4028 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.193000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:58:19.204000 audit: BPF prog-id=202 op=LOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc97ee3670 a2=98 a3=0 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.204000 audit: BPF prog-id=202 op=UNLOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc97ee3640 a3=0 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.204000 audit: BPF prog-id=203 op=LOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc97ee3460 a2=94 a3=54428f items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.204000 audit: BPF prog-id=203 op=UNLOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc97ee3460 a2=94 a3=54428f items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.204000 audit: BPF prog-id=204 op=LOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc97ee3490 a2=94 a3=2 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.204000 audit: BPF prog-id=204 op=UNLOAD Jan 23 18:58:19.204000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc97ee3490 a2=0 a3=2 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.374000 audit: BPF prog-id=205 op=LOAD Jan 23 18:58:19.374000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc97ee3350 a2=94 a3=1 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.374000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.374000 audit: BPF prog-id=205 op=UNLOAD Jan 23 18:58:19.374000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc97ee3350 a2=94 a3=1 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.374000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.383000 audit: BPF prog-id=206 op=LOAD Jan 23 18:58:19.383000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc97ee3340 a2=94 a3=4 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.383000 audit: BPF prog-id=206 op=UNLOAD Jan 23 18:58:19.383000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc97ee3340 a2=0 a3=4 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.383000 audit: BPF prog-id=207 op=LOAD Jan 23 18:58:19.383000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc97ee31a0 a2=94 a3=5 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.383000 audit: BPF prog-id=207 op=UNLOAD Jan 23 18:58:19.383000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc97ee31a0 a2=0 a3=5 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.383000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.384000 audit: BPF prog-id=208 op=LOAD Jan 23 18:58:19.384000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc97ee33c0 a2=94 a3=6 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.384000 audit: BPF prog-id=208 op=UNLOAD Jan 23 18:58:19.384000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc97ee33c0 a2=0 a3=6 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.384000 audit: BPF prog-id=209 op=LOAD Jan 23 18:58:19.384000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc97ee2b70 a2=94 a3=88 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.384000 audit: BPF prog-id=210 op=LOAD Jan 23 18:58:19.384000 audit[4214]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc97ee29f0 a2=94 a3=2 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.384000 audit: BPF prog-id=210 op=UNLOAD Jan 23 18:58:19.384000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc97ee2a20 a2=0 
a3=7ffc97ee2b20 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.385000 audit: BPF prog-id=209 op=UNLOAD Jan 23 18:58:19.385000 audit[4214]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3ee33d10 a2=0 a3=2e335c0d4ff170 items=0 ppid=4028 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.385000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:58:19.402000 audit: BPF prog-id=201 op=UNLOAD Jan 23 18:58:19.402000 audit[4028]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00099c040 a2=0 a3=0 items=0 ppid=4015 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.402000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 18:58:19.434552 containerd[1599]: time="2026-01-23T18:58:19.434354102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ttvc9,Uid:b84e892a-422a-478b-8739-e473d68e3bdf,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:19.439147 containerd[1599]: time="2026-01-23T18:58:19.439104302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-2gfjg,Uid:a83d5fe0-7502-4b47-a69c-d746b0e6550b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:19.502000 audit[4265]: NETFILTER_CFG table=nat:117 family=2 entries=15 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.502000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff328b3be0 a2=0 a3=7fff328b3bcc items=0 ppid=4028 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.502000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.503000 audit[4264]: NETFILTER_CFG table=mangle:118 family=2 entries=16 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.503000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff2135580 a2=0 a3=562482569000 items=0 ppid=4028 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.503000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.518000 
audit[4262]: NETFILTER_CFG table=raw:119 family=2 entries=21 op=nft_register_chain pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.518000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd7c2c6b90 a2=0 a3=7ffd7c2c6b7c items=0 ppid=4028 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.526000 audit[4267]: NETFILTER_CFG table=filter:120 family=2 entries=94 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.526000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fff6dcc3a00 a2=0 a3=55671629f000 items=0 ppid=4028 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.526000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.637782 systemd-networkd[1510]: califd0bf6a000a: Link UP Jan 23 18:58:19.639197 systemd-networkd[1510]: califd0bf6a000a: Gained carrier Jan 23 18:58:19.645268 kubelet[2813]: E0123 18:58:19.645236 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:19.652962 kubelet[2813]: E0123 18:58:19.652930 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:58:19.678601 containerd[1599]: 2026-01-23 18:58:19.515 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ttvc9-eth0 csi-node-driver- calico-system b84e892a-422a-478b-8739-e473d68e3bdf 723 0 2026-01-23 18:58:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ttvc9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califd0bf6a000a [] [] }} 
ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-" Jan 23 18:58:19.678601 containerd[1599]: 2026-01-23 18:58:19.515 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.678601 containerd[1599]: 2026-01-23 18:58:19.574 [INFO][4276] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" HandleID="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Workload="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.574 [INFO][4276] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" HandleID="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Workload="localhost-k8s-csi--node--driver--ttvc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000515c10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ttvc9", "timestamp":"2026-01-23 18:58:19.574045166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.574 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.575 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.575 [INFO][4276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.586 [INFO][4276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" host="localhost" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.594 [INFO][4276] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.601 [INFO][4276] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.603 [INFO][4276] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.606 [INFO][4276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:19.678918 containerd[1599]: 2026-01-23 18:58:19.606 [INFO][4276] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" host="localhost" Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.609 [INFO][4276] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169 Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.614 [INFO][4276] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" host="localhost" Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4276] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" host="localhost" Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" host="localhost" Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
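The IPAM exchange above assigns 192.168.88.130/26 from the host-affine block 192.168.88.128/26. A small verification of that arithmetic with the standard-library ipaddress module (an illustration only, not Calico's own allocation code):

```python
import ipaddress

# The host-affine block and the address the IPAM plugin claimed above.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.130")

print(assigned in block)        # True: the address falls inside the affine block
print(block.num_addresses)      # 64 addresses per /26 block
print(block.broadcast_address)  # 192.168.88.191, the last address in the block
```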
Jan 23 18:58:19.679147 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4276] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" HandleID="k8s-pod-network.d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Workload="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.679289 containerd[1599]: 2026-01-23 18:58:19.630 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ttvc9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b84e892a-422a-478b-8739-e473d68e3bdf", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ttvc9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd0bf6a000a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:19.679373 containerd[1599]: 2026-01-23 18:58:19.630 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.679373 containerd[1599]: 2026-01-23 18:58:19.630 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd0bf6a000a ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.679373 containerd[1599]: 2026-01-23 18:58:19.640 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.679428 containerd[1599]: 2026-01-23 18:58:19.640 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ttvc9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b84e892a-422a-478b-8739-e473d68e3bdf", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169", Pod:"csi-node-driver-ttvc9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd0bf6a000a", MAC:"86:ee:53:f1:fe:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:19.679539 containerd[1599]: 2026-01-23 18:58:19.668 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" Namespace="calico-system" Pod="csi-node-driver-ttvc9" WorkloadEndpoint="localhost-k8s-csi--node--driver--ttvc9-eth0" Jan 23 18:58:19.719000 audit[4321]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:19.719000 audit[4321]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd5b2b4d70 a2=0 a3=7ffd5b2b4d5c items=0 ppid=2952 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:19.731000 audit[4321]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:19.731000 audit[4321]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd5b2b4d70 a2=0 a3=0 items=0 ppid=2952 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:19.746000 audit[4329]: NETFILTER_CFG table=filter:123 family=2 entries=36 op=nft_register_chain pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.746000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=19576 a0=3 a1=7ffec67de1c0 a2=0 a3=7ffec67de1ac items=0 ppid=4028 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.746000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.751131 containerd[1599]: time="2026-01-23T18:58:19.750955008Z" level=info msg="connecting to shim d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169" address="unix:///run/containerd/s/e64007f1ee57caba93bc5c88c7d01774609fd12c5bf731401e71879a852805e8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:19.768324 systemd-networkd[1510]: calie7799e9132e: Link UP Jan 23 18:58:19.769643 systemd-networkd[1510]: calie7799e9132e: Gained carrier Jan 23 18:58:19.796042 systemd[1]: Started cri-containerd-d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169.scope - libcontainer container d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169. Jan 23 18:58:19.797171 containerd[1599]: 2026-01-23 18:58:19.535 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0 calico-apiserver-6b479c8c46- calico-apiserver a83d5fe0-7502-4b47-a69c-d746b0e6550b 832 0 2026-01-23 18:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b479c8c46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b479c8c46-2gfjg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7799e9132e [] [] }} ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-" Jan 23 18:58:19.797171 containerd[1599]: 2026-01-23 18:58:19.536 [INFO][4239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.797171 containerd[1599]: 2026-01-23 18:58:19.591 [INFO][4284] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" HandleID="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Workload="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.592 [INFO][4284] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" HandleID="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Workload="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b479c8c46-2gfjg", "timestamp":"2026-01-23 18:58:19.5915541 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.592 [INFO][4284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.622 [INFO][4284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.695 [INFO][4284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" host="localhost" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.705 [INFO][4284] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.715 [INFO][4284] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.721 [INFO][4284] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.731 [INFO][4284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:19.797607 containerd[1599]: 2026-01-23 18:58:19.732 [INFO][4284] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" host="localhost" Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.736 [INFO][4284] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9 Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.747 [INFO][4284] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" host="localhost" Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.755 [INFO][4284] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" host="localhost" Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.755 [INFO][4284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" host="localhost" Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.756 [INFO][4284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
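The SYSCALL records in this log all report arch=c000003e (AUDIT_ARCH_X86_64), so the numeric syscall values follow the usual x86_64 table; the ausyscall utility from the audit userspace package can produce the same mapping. For the numbers that actually appear above:

```python
# x86_64 syscall numbers seen in the audit records above (arch=c000003e).
X86_64_SYSCALLS = {
    3: "close",      # closing the fd returned by each preceding bpf() probe
    46: "sendmsg",   # the netlink calls made by iptables-nft-restore
    263: "unlinkat", # the calico-node record following the prog-id 201 unload
    321: "bpf",      # the bpftool program load/probe calls
}

def syscall_name(number: int) -> str:
    return X86_64_SYSCALLS.get(number, f"unknown({number})")

print(syscall_name(321))  # bpf
```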
Jan 23 18:58:19.798642 containerd[1599]: 2026-01-23 18:58:19.756 [INFO][4284] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" HandleID="k8s-pod-network.6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Workload="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.798948 containerd[1599]: 2026-01-23 18:58:19.765 [INFO][4239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0", GenerateName:"calico-apiserver-6b479c8c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83d5fe0-7502-4b47-a69c-d746b0e6550b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b479c8c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b479c8c46-2gfjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7799e9132e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:19.799098 containerd[1599]: 2026-01-23 18:58:19.765 [INFO][4239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.799098 containerd[1599]: 2026-01-23 18:58:19.765 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7799e9132e ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.799098 containerd[1599]: 2026-01-23 18:58:19.770 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.799216 containerd[1599]: 2026-01-23 18:58:19.772 [INFO][4239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0", GenerateName:"calico-apiserver-6b479c8c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a83d5fe0-7502-4b47-a69c-d746b0e6550b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b479c8c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9", Pod:"calico-apiserver-6b479c8c46-2gfjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7799e9132e", MAC:"5e:7f:97:21:54:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:19.799368 containerd[1599]: 2026-01-23 18:58:19.790 [INFO][4239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-2gfjg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--2gfjg-eth0" Jan 23 18:58:19.831000 audit: BPF prog-id=211 op=LOAD Jan 23 18:58:19.832000 audit: BPF prog-id=212 op=LOAD Jan 23 18:58:19.832000 audit[4349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.832000 audit: BPF prog-id=212 op=UNLOAD Jan 23 18:58:19.832000 audit[4349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.832000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.832000 audit: BPF prog-id=213 op=LOAD Jan 23 18:58:19.832000 audit[4349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.833000 audit: BPF prog-id=214 op=LOAD Jan 23 18:58:19.833000 audit[4349]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.833000 audit: BPF prog-id=214 op=UNLOAD Jan 23 18:58:19.833000 audit[4349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.833000 audit: BPF prog-id=213 op=UNLOAD Jan 23 18:58:19.833000 audit[4349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.833000 audit: BPF prog-id=215 op=LOAD Jan 23 18:58:19.833000 audit[4349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4337 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.833000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439306630356239326239336535326237643136333138656432626166 Jan 23 18:58:19.837736 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:19.840281 containerd[1599]: time="2026-01-23T18:58:19.840190241Z" level=info msg="connecting to shim 6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9" address="unix:///run/containerd/s/4f5b78a3efa067d6ec1f6c9638900213121b5a3d103dc7566a8791cbf8458995" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:19.842000 audit[4381]: NETFILTER_CFG table=filter:124 family=2 entries=60 op=nft_register_chain pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:19.842000 audit[4381]: SYSCALL arch=c000003e syscall=46 success=yes exit=32248 a0=3 a1=7ffea1974da0 a2=0 a3=7ffea1974d8c items=0 ppid=4028 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.842000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:19.889109 systemd[1]: Started cri-containerd-6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9.scope - libcontainer container 6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9. Jan 23 18:58:19.892989 containerd[1599]: time="2026-01-23T18:58:19.892921434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ttvc9,Uid:b84e892a-422a-478b-8739-e473d68e3bdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"d90f05b92b93e52b7d16318ed2bafd850585e92cebe3f2a28d6a35fecd36d169\"" Jan 23 18:58:19.895004 containerd[1599]: time="2026-01-23T18:58:19.894751290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:19.910000 audit: BPF prog-id=216 op=LOAD Jan 23 18:58:19.911000 audit: BPF prog-id=217 op=LOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=217 op=UNLOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=218 op=LOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=219 op=LOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=219 op=UNLOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=218 op=UNLOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.911000 audit: BPF prog-id=220 op=LOAD Jan 23 18:58:19.911000 audit[4404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4391 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:19.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661623361633963333139393035653966363335356463376233653830 Jan 23 18:58:19.913772 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:19.965110 containerd[1599]: time="2026-01-23T18:58:19.965034898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-2gfjg,Uid:a83d5fe0-7502-4b47-a69c-d746b0e6550b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6ab3ac9c319905e9f6355dc7b3e8024463a5420c90f4858d7aab3c882beec9d9\"" Jan 23 18:58:20.006062 containerd[1599]: time="2026-01-23T18:58:20.005730121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:20.007533 containerd[1599]: time="2026-01-23T18:58:20.007455386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:20.007660 containerd[1599]: time="2026-01-23T18:58:20.007558287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:20.007707 kubelet[2813]: E0123 18:58:20.007677 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:20.007757 kubelet[2813]: E0123 18:58:20.007717 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:20.007971 kubelet[2813]: E0123 18:58:20.007942 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:20.008277 containerd[1599]: time="2026-01-23T18:58:20.008166847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:20.073093 containerd[1599]: time="2026-01-23T18:58:20.072987792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:20.074584 containerd[1599]: time="2026-01-23T18:58:20.074410601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:20.074584 containerd[1599]: time="2026-01-23T18:58:20.074524131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:20.074798 kubelet[2813]: E0123 18:58:20.074760 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:20.074798 kubelet[2813]: E0123 18:58:20.074801 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:20.075305 kubelet[2813]: E0123 18:58:20.075123 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-2gfjg_calico-apiserver(a83d5fe0-7502-4b47-a69c-d746b0e6550b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:20.075305 kubelet[2813]: E0123 18:58:20.075180 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:20.075405 containerd[1599]: time="2026-01-23T18:58:20.075259781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:20.150371 containerd[1599]: time="2026-01-23T18:58:20.150122161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:20.151811 containerd[1599]: time="2026-01-23T18:58:20.151714065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:20.152001 containerd[1599]: time="2026-01-23T18:58:20.151871677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:20.152738 kubelet[2813]: E0123 18:58:20.152452 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:20.152738 kubelet[2813]: E0123 18:58:20.152561 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:20.152738 kubelet[2813]: E0123 18:58:20.152684 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:20.152738 kubelet[2813]: E0123 18:58:20.152729 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:20.239095 systemd-networkd[1510]: calied633e158f2: Gained IPv6LL Jan 23 18:58:20.434298 kubelet[2813]: E0123 18:58:20.434212 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:20.435780 containerd[1599]: time="2026-01-23T18:58:20.435724628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c5w5n,Uid:b93fb228-d6f3-4cd2-be25-e621a5f856a3,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:20.439656 containerd[1599]: time="2026-01-23T18:58:20.439518220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5fc65b5f-cz9b8,Uid:cc04cd5b-edb0-48f3-88f2-e7b09f3ee672,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:20.443028 containerd[1599]: time="2026-01-23T18:58:20.442994431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-jn62z,Uid:3d13162b-7811-42a8-ba7e-74957a3844c9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:20.651715 kubelet[2813]: E0123 18:58:20.651661 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:20.657317 systemd-networkd[1510]: cali46c27fc773d: Link UP Jan 23 18:58:20.657911 kubelet[2813]: E0123 18:58:20.657785 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:20.659209 systemd-networkd[1510]: cali46c27fc773d: Gained carrier Jan 23 18:58:20.686462 containerd[1599]: 2026-01-23 18:58:20.533 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--c5w5n-eth0 coredns-66bc5c9577- kube-system b93fb228-d6f3-4cd2-be25-e621a5f856a3 826 0 2026-01-23 18:57:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-c5w5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c27fc773d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-" Jan 23 18:58:20.686462 containerd[1599]: 2026-01-23 18:58:20.533 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.686462 containerd[1599]: 2026-01-23 18:58:20.595 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" HandleID="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Workload="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.595 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" HandleID="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Workload="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027eb20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-c5w5n", "timestamp":"2026-01-23 18:58:20.595292016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.595 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.595 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.595 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.605 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" host="localhost" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.614 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.622 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.624 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.627 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.686750 containerd[1599]: 2026-01-23 18:58:20.627 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" host="localhost" Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.630 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0 Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.635 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" host="localhost" Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.641 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" host="localhost" Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.642 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" host="localhost" Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.642 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
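Note: the IPAM entries above follow one allocation pass end to end: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, claim the next free address (192.168.88.132 here), release the lock. The toy sketch below illustrates only the "next free address in the block" step; it is not Calico's ipam.go, and the set of previously claimed addresses is an assumption chosen so the result matches the address handed to the coredns pod in this log.

```go
// Toy illustration of claiming the next free address from an affine block.
// Not Calico's implementation; block and outcome are taken from the journal,
// the previously claimed addresses are assumed.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block in order and returns the first address not yet claimed.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
	used := map[netip.Addr]bool{
		// assumed to have been handed out earlier in the journal (not shown here)
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
		netip.MustParseAddr("192.168.88.131"): true,
	}
	if addr, ok := nextFree(block, used); ok {
		fmt.Println("claimed", addr) // 192.168.88.132, matching the coredns pod above
	}
}
```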
Jan 23 18:58:20.687071 containerd[1599]: 2026-01-23 18:58:20.642 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" HandleID="k8s-pod-network.4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Workload="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.651 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c5w5n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93fb228-d6f3-4cd2-be25-e621a5f856a3", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-c5w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c27fc773d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.651 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.651 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c27fc773d ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.660 
[INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.662 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c5w5n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b93fb228-d6f3-4cd2-be25-e621a5f856a3", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0", Pod:"coredns-66bc5c9577-c5w5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c27fc773d", MAC:"ae:7c:cc:b8:35:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.687199 containerd[1599]: 2026-01-23 18:58:20.677 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" Namespace="kube-system" Pod="coredns-66bc5c9577-c5w5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c5w5n-eth0" Jan 23 18:58:20.714000 audit[4516]: NETFILTER_CFG table=filter:125 family=2 entries=46 op=nft_register_chain pid=4516 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:20.714000 audit[4516]: SYSCALL arch=c000003e syscall=46 success=yes exit=23724 a0=3 a1=7fff444a7e70 a2=0 a3=7fff444a7e5c items=0 ppid=4028 pid=4516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.714000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:20.736951 containerd[1599]: time="2026-01-23T18:58:20.736680492Z" level=info msg="connecting to shim 4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0" address="unix:///run/containerd/s/7c5ec779aeb56fa336b15aba0696569e577f3a469a651708b737528107fd61d8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:20.769975 systemd-networkd[1510]: cali7fe5027542c: Link UP Jan 23 18:58:20.771425 systemd-networkd[1510]: cali7fe5027542c: Gained carrier Jan 23 18:58:20.773000 audit[4539]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:20.773000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc75ebc820 a2=0 a3=7ffc75ebc80c items=0 ppid=2952 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:20.778000 audit[4539]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:20.778000 audit[4539]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc75ebc820 a2=0 a3=0 items=0 ppid=2952 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:20.792066 systemd[1]: Started cri-containerd-4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0.scope - libcontainer container 4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0. 
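Note: the audit PROCTITLE fields above carry the audited command line as a hex string with NUL-separated arguments. A short Go sketch of that decoding, using the value from the iptables-nft-restore record verbatim; the helper name is ours, not part of any audit tooling.

```go
// Decode an audit PROCTITLE hex string into its NUL-separated argv.
// The sample value is copied from the iptables-nft-restore entry in the journal.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	// Arguments are separated by NUL bytes in the audit record.
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	const sample = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	argv, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```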
Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.530 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0 calico-kube-controllers-d5fc65b5f- calico-system cc04cd5b-edb0-48f3-88f2-e7b09f3ee672 824 0 2026-01-23 18:58:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d5fc65b5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d5fc65b5f-cz9b8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7fe5027542c [] [] }} ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.530 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.607 [INFO][4483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" HandleID="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Workload="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.607 [INFO][4483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" HandleID="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Workload="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002903d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d5fc65b5f-cz9b8", "timestamp":"2026-01-23 18:58:20.607206109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.607 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.642 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.642 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.706 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.717 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.729 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.732 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.736 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.736 [INFO][4483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.741 [INFO][4483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1 Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.749 [INFO][4483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.758 [INFO][4483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.759 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" host="localhost" Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.759 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:20.800184 containerd[1599]: 2026-01-23 18:58:20.759 [INFO][4483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" HandleID="k8s-pod-network.cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Workload="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.765 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0", GenerateName:"calico-kube-controllers-d5fc65b5f-", Namespace:"calico-system", SelfLink:"", UID:"cc04cd5b-edb0-48f3-88f2-e7b09f3ee672", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5fc65b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d5fc65b5f-cz9b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fe5027542c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.765 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.765 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fe5027542c ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.773 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.776 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0", GenerateName:"calico-kube-controllers-d5fc65b5f-", Namespace:"calico-system", SelfLink:"", UID:"cc04cd5b-edb0-48f3-88f2-e7b09f3ee672", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5fc65b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1", Pod:"calico-kube-controllers-d5fc65b5f-cz9b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7fe5027542c", MAC:"02:2d:9b:a8:72:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.801273 containerd[1599]: 2026-01-23 18:58:20.790 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" Namespace="calico-system" Pod="calico-kube-controllers-d5fc65b5f-cz9b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5fc65b5f--cz9b8-eth0" Jan 23 18:58:20.812000 audit: BPF prog-id=221 op=LOAD Jan 23 18:58:20.812000 audit: BPF prog-id=222 op=LOAD Jan 23 18:58:20.812000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.812000 audit: BPF prog-id=222 op=UNLOAD Jan 23 18:58:20.812000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.812000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.813000 audit: BPF prog-id=223 op=LOAD Jan 23 18:58:20.813000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.813000 audit: BPF prog-id=224 op=LOAD Jan 23 18:58:20.813000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.814000 audit: BPF prog-id=224 op=UNLOAD Jan 23 18:58:20.814000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.814000 audit: BPF prog-id=223 op=UNLOAD Jan 23 18:58:20.814000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.814000 audit: BPF prog-id=225 op=LOAD Jan 23 18:58:20.814000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4526 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464383866303936636234663463663933343231613534356436646665 Jan 23 18:58:20.818328 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:20.827000 audit[4566]: NETFILTER_CFG table=filter:128 family=2 entries=44 op=nft_register_chain pid=4566 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:20.827000 audit[4566]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7fff9b4610e0 a2=0 a3=7fff9b4610cc items=0 ppid=4028 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.827000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:20.850771 containerd[1599]: time="2026-01-23T18:58:20.850092112Z" level=info msg="connecting to shim cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1" address="unix:///run/containerd/s/a6a51779756bf035f7bc20d41cf9e1b348c01302bbfb1e4639fd5f2583cbc3b1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:20.874377 systemd-networkd[1510]: cali2d7a3a7c436: Link UP Jan 23 18:58:20.875915 systemd-networkd[1510]: cali2d7a3a7c436: Gained carrier Jan 23 18:58:20.879560 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Jan 23 18:58:20.885163 containerd[1599]: time="2026-01-23T18:58:20.884452681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c5w5n,Uid:b93fb228-d6f3-4cd2-be25-e621a5f856a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0\"" Jan 23 18:58:20.889651 kubelet[2813]: E0123 18:58:20.888769 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:20.895653 containerd[1599]: time="2026-01-23T18:58:20.895623110Z" level=info msg="CreateContainer within sandbox \"4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.541 [INFO][4461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0 calico-apiserver-6b479c8c46- calico-apiserver 3d13162b-7811-42a8-ba7e-74957a3844c9 835 0 2026-01-23 18:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b479c8c46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b479c8c46-jn62z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d7a3a7c436 [] [] }} ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-" Jan 23 18:58:20.907741 containerd[1599]: 
2026-01-23 18:58:20.541 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.607 [INFO][4496] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" HandleID="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Workload="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.609 [INFO][4496] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" HandleID="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Workload="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b479c8c46-jn62z", "timestamp":"2026-01-23 18:58:20.607962016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.609 [INFO][4496] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.759 [INFO][4496] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.759 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.807 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.819 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.834 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.838 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.841 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.841 [INFO][4496] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.843 [INFO][4496] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.849 [INFO][4496] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.863 [INFO][4496] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.863 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" host="localhost" Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.863 [INFO][4496] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:20.907741 containerd[1599]: 2026-01-23 18:58:20.864 [INFO][4496] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" HandleID="k8s-pod-network.40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Workload="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.868 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0", GenerateName:"calico-apiserver-6b479c8c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d13162b-7811-42a8-ba7e-74957a3844c9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b479c8c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b479c8c46-jn62z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d7a3a7c436", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.868 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.868 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d7a3a7c436 ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.876 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.877 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0", GenerateName:"calico-apiserver-6b479c8c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d13162b-7811-42a8-ba7e-74957a3844c9", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b479c8c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac", Pod:"calico-apiserver-6b479c8c46-jn62z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d7a3a7c436", MAC:"1e:31:09:e9:05:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:20.908387 containerd[1599]: 2026-01-23 18:58:20.900 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" Namespace="calico-apiserver" Pod="calico-apiserver-6b479c8c46-jn62z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b479c8c46--jn62z-eth0" Jan 23 18:58:20.915354 containerd[1599]: time="2026-01-23T18:58:20.915294580Z" level=info msg="Container 0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:20.917270 systemd[1]: Started cri-containerd-cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1.scope - libcontainer container cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1. 
Jan 23 18:58:20.928160 containerd[1599]: time="2026-01-23T18:58:20.928086748Z" level=info msg="CreateContainer within sandbox \"4d88f096cb4f4cf93421a545d6dfeb48ecd37c39f6706845b2a0f0d393f07ab0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491\"" Jan 23 18:58:20.930677 containerd[1599]: time="2026-01-23T18:58:20.929653154Z" level=info msg="StartContainer for \"0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491\"" Jan 23 18:58:20.933021 containerd[1599]: time="2026-01-23T18:58:20.932789813Z" level=info msg="connecting to shim 0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491" address="unix:///run/containerd/s/7c5ec779aeb56fa336b15aba0696569e577f3a469a651708b737528107fd61d8" protocol=ttrpc version=3 Jan 23 18:58:20.943078 systemd-networkd[1510]: calie7799e9132e: Gained IPv6LL Jan 23 18:58:20.948000 audit[4620]: NETFILTER_CFG table=filter:129 family=2 entries=49 op=nft_register_chain pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:20.948000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7fff724913f0 a2=0 a3=7fff724913dc items=0 ppid=4028 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.948000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:20.955213 containerd[1599]: time="2026-01-23T18:58:20.954993100Z" level=info msg="connecting to shim 40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac" address="unix:///run/containerd/s/38ecef80ec407d425f83af3d8ea8fe7c627bbc0160d246425431a0358fd6fe2d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:20.965022 systemd[1]: Started cri-containerd-0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491.scope - libcontainer container 0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491. 
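Note: the recurring dns.go:154 "Nameserver limits exceeded" warnings above show kubelet applying only three resolvers (1.1.1.1 1.0.0.1 8.8.8.8) and omitting the rest. A minimal sketch of that truncation; the three-entry cap is an assumption consistent with the applied list in this log, not a quote of kubelet's source, and the extra resolver in the example is hypothetical.

```go
// Minimal sketch of the truncation behind the "Nameserver limits exceeded" warnings:
// keep only the first N nameservers and report the remainder as omitted.
package main

import "fmt"

const maxNameservers = 3 // assumed cap, matching the three applied entries in the journal

func applyNameserverLimit(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// Hypothetical host resolver list that would trigger the warning.
	resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := applyNameserverLimit(resolvers)
	if len(omitted) > 0 {
		fmt.Printf("nameserver limits exceeded, omitting %v; applied: %v\n", omitted, applied)
	}
}
```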
Jan 23 18:58:20.967000 audit: BPF prog-id=226 op=LOAD Jan 23 18:58:20.968000 audit: BPF prog-id=227 op=LOAD Jan 23 18:58:20.968000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.968000 audit: BPF prog-id=227 op=UNLOAD Jan 23 18:58:20.968000 audit[4593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.969000 audit: BPF prog-id=228 op=LOAD Jan 23 18:58:20.969000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.969000 audit: BPF prog-id=229 op=LOAD Jan 23 18:58:20.969000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.969000 audit: BPF prog-id=229 op=UNLOAD Jan 23 18:58:20.969000 audit[4593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.969000 audit: BPF prog-id=228 op=UNLOAD Jan 23 18:58:20.969000 audit[4593]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.969000 audit: BPF prog-id=230 op=LOAD Jan 23 18:58:20.969000 audit[4593]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=4574 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361666435616635303633393136326137383335386161346361626662 Jan 23 18:58:20.972563 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:20.985303 systemd[1]: Started cri-containerd-40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac.scope - libcontainer container 40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac. Jan 23 18:58:20.993000 audit: BPF prog-id=231 op=LOAD Jan 23 18:58:20.993000 audit: BPF prog-id=232 op=LOAD Jan 23 18:58:20.993000 audit[4622]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.993000 audit: BPF prog-id=232 op=UNLOAD Jan 23 18:58:20.993000 audit[4622]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.993000 audit: BPF prog-id=233 op=LOAD Jan 23 18:58:20.993000 audit[4622]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.993000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.993000 audit: BPF prog-id=234 op=LOAD Jan 23 18:58:20.993000 audit[4622]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.994000 audit: BPF prog-id=234 op=UNLOAD Jan 23 18:58:20.994000 audit[4622]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.994000 audit: BPF prog-id=233 op=UNLOAD Jan 23 18:58:20.994000 audit[4622]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:20.994000 audit: BPF prog-id=235 op=LOAD Jan 23 18:58:20.994000 audit[4622]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4526 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:20.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386361373430333963343435626433393334353536356266666336 Jan 23 18:58:21.012000 audit: BPF prog-id=236 op=LOAD Jan 23 18:58:21.014000 audit: BPF prog-id=237 op=LOAD Jan 23 18:58:21.014000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.014000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.014000 audit: BPF prog-id=237 op=UNLOAD Jan 23 18:58:21.014000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.014000 audit: BPF prog-id=238 op=LOAD Jan 23 18:58:21.014000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.014000 audit: BPF prog-id=239 op=LOAD Jan 23 18:58:21.014000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.015000 audit: BPF prog-id=239 op=UNLOAD Jan 23 18:58:21.015000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.015000 audit: BPF prog-id=238 op=UNLOAD Jan 23 18:58:21.015000 audit[4652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.015000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.015000 audit: BPF prog-id=240 op=LOAD Jan 23 18:58:21.015000 audit[4652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4635 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430646564363964343962666332666566333963616463336230623338 Jan 23 18:58:21.018019 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:21.033967 containerd[1599]: time="2026-01-23T18:58:21.033904689Z" level=info msg="StartContainer for \"0e8ca74039c445bd39345565bffc636e21b79404b12ebc70207923f945c77491\" returns successfully" Jan 23 18:58:21.049921 containerd[1599]: time="2026-01-23T18:58:21.049886285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5fc65b5f-cz9b8,Uid:cc04cd5b-edb0-48f3-88f2-e7b09f3ee672,Namespace:calico-system,Attempt:0,} returns sandbox id \"cafd5af50639162a78358aa4cabfbaf056cf370c0c45d618ec7e3796b742a4a1\"" Jan 23 18:58:21.054040 containerd[1599]: time="2026-01-23T18:58:21.053457555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:21.101647 containerd[1599]: time="2026-01-23T18:58:21.101590411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b479c8c46-jn62z,Uid:3d13162b-7811-42a8-ba7e-74957a3844c9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"40ded69d49bfc2fef39cadc3b0b38a2067416eb2ff0058b0a4e8999449992aac\"" Jan 23 18:58:21.141381 containerd[1599]: time="2026-01-23T18:58:21.141270579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:21.142973 containerd[1599]: time="2026-01-23T18:58:21.142716479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:21.142973 containerd[1599]: time="2026-01-23T18:58:21.142800537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:21.143445 kubelet[2813]: E0123 18:58:21.143382 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:21.143445 kubelet[2813]: E0123 18:58:21.143431 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:21.143694 kubelet[2813]: E0123 18:58:21.143668 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d5fc65b5f-cz9b8_calico-system(cc04cd5b-edb0-48f3-88f2-e7b09f3ee672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:21.143729 kubelet[2813]: E0123 18:58:21.143705 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:21.144386 containerd[1599]: time="2026-01-23T18:58:21.144358677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:21.222304 containerd[1599]: time="2026-01-23T18:58:21.221996450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:21.223754 containerd[1599]: time="2026-01-23T18:58:21.223651395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:21.223754 containerd[1599]: time="2026-01-23T18:58:21.223726035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:21.224087 kubelet[2813]: E0123 18:58:21.223969 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:21.224087 kubelet[2813]: E0123 18:58:21.224020 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:21.224426 kubelet[2813]: E0123 18:58:21.224291 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-jn62z_calico-apiserver(3d13162b-7811-42a8-ba7e-74957a3844c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:21.224426 kubelet[2813]: E0123 18:58:21.224337 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:21.434351 containerd[1599]: time="2026-01-23T18:58:21.434293842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lwrn2,Uid:16aed12c-3493-44d9-ad22-ad901db963e9,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:21.519082 systemd-networkd[1510]: califd0bf6a000a: Gained IPv6LL Jan 23 18:58:21.566626 systemd-networkd[1510]: cali876cd065c8a: Link UP Jan 23 18:58:21.567611 systemd-networkd[1510]: cali876cd065c8a: Gained carrier Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.484 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--lwrn2-eth0 goldmane-7c778bb748- calico-system 16aed12c-3493-44d9-ad22-ad901db963e9 836 0 2026-01-23 18:57:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-lwrn2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali876cd065c8a [] [] }} ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.484 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.517 [INFO][4721] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" HandleID="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Workload="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.518 [INFO][4721] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" HandleID="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Workload="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001306b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-lwrn2", "timestamp":"2026-01-23 18:58:21.517757665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.518 [INFO][4721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.518 [INFO][4721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
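For reference, the PROCTITLE fields in the audit records above are the runc command line, hex-encoded with NUL-separated arguments, and the syscall numbers follow the standard x86_64 table (arch=c000003e): 321 is bpf(2) and 3 is close(2). A minimal decoding sketch, not part of the journal itself; the sample below is a truncated prefix of one hex string from the records above:

# Helper for reading the audit records in this log; illustrative only.
import binascii

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string into the original command line."""
    return binascii.unhexlify(hex_value).decode("utf-8", "replace").replace("\x00", " ")

X86_64_SYSCALLS = {3: "close", 321: "bpf"}  # only the numbers that appear in these records

sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F"
          "72756E632F6B38732E696F002D2D6C6F67")
print(decode_proctitle(sample))   # -> runc --root /run/containerd/runc/k8s.io --log
print(X86_64_SYSCALLS[321])       # -> bpf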
Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.518 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.530 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.536 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.541 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.544 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.546 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.546 [INFO][4721] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.548 [INFO][4721] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8 Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.552 [INFO][4721] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.559 [INFO][4721] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.559 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" host="localhost" Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.559 [INFO][4721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
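The IPAM sequence above confirms block affinity for 192.168.88.128/26 on host "localhost" and then claims 192.168.88.135/26 from that block for goldmane-7c778bb748-lwrn2. A quick external sanity check of that claim (illustrative, not Calico code; values taken from the records above):

# Verify the claimed address lies inside the affine block shown in the IPAM log entries.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # block cidr from "Trying affinity for 192.168.88.128/26"
claimed = ipaddress.ip_address("192.168.88.135")    # from "Successfully claimed IPs: [192.168.88.135/26]"

assert claimed in block
print(f"{claimed} is within {block} ({block.num_addresses} addresses)")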
Jan 23 18:58:21.580046 containerd[1599]: 2026-01-23 18:58:21.559 [INFO][4721] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" HandleID="k8s-pod-network.a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Workload="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.563 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--lwrn2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16aed12c-3493-44d9-ad22-ad901db963e9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-lwrn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali876cd065c8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.563 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.563 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali876cd065c8a ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.567 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.568 [INFO][4706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--lwrn2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16aed12c-3493-44d9-ad22-ad901db963e9", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8", Pod:"goldmane-7c778bb748-lwrn2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali876cd065c8a", MAC:"ea:a2:ce:30:97:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:21.581319 containerd[1599]: 2026-01-23 18:58:21.576 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" Namespace="calico-system" Pod="goldmane-7c778bb748-lwrn2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--lwrn2-eth0" Jan 23 18:58:21.602000 audit[4739]: NETFILTER_CFG table=filter:130 family=2 entries=60 op=nft_register_chain pid=4739 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:21.602000 audit[4739]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7fffa53a6650 a2=0 a3=7fffa53a663c items=0 ppid=4028 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:21.618051 containerd[1599]: time="2026-01-23T18:58:21.618009740Z" level=info msg="connecting to shim a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8" address="unix:///run/containerd/s/a252f2b916136bef12e4c51c1230b4ae201bca5fccccb07d9017489cda7d5ffd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:21.663779 kubelet[2813]: E0123 18:58:21.663667 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:21.666002 kubelet[2813]: E0123 18:58:21.665980 2813 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:21.683411 systemd[1]: Started cri-containerd-a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8.scope - libcontainer container a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8. Jan 23 18:58:21.696793 kubelet[2813]: E0123 18:58:21.696401 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:21.702655 kubelet[2813]: E0123 18:58:21.702554 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:21.702655 kubelet[2813]: E0123 18:58:21.702197 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:21.725000 audit: BPF prog-id=241 op=LOAD Jan 23 18:58:21.726000 audit: BPF prog-id=242 op=LOAD Jan 23 18:58:21.726000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.726000 audit: BPF prog-id=242 op=UNLOAD Jan 23 18:58:21.726000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.727000 audit: BPF prog-id=243 op=LOAD Jan 23 18:58:21.727000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.728000 audit: BPF prog-id=244 op=LOAD Jan 23 18:58:21.728000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.728000 audit: BPF prog-id=244 op=UNLOAD Jan 23 18:58:21.728000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.728000 audit: BPF prog-id=243 op=UNLOAD Jan 23 18:58:21.728000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.728000 audit: BPF prog-id=245 op=LOAD Jan 23 18:58:21.728000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4747 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.728000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136323763616466376233613962656665363935613232383066366231 Jan 23 18:58:21.732085 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:21.765676 kubelet[2813]: I0123 18:58:21.765097 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c5w5n" podStartSLOduration=36.765081867 podStartE2EDuration="36.765081867s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:21.747627314 +0000 UTC m=+41.481017475" watchObservedRunningTime="2026-01-23 18:58:21.765081867 +0000 UTC m=+41.498472027" Jan 23 18:58:21.796181 containerd[1599]: time="2026-01-23T18:58:21.795990611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lwrn2,Uid:16aed12c-3493-44d9-ad22-ad901db963e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a627cadf7b3a9befe695a2280f6b1a0cdeca4cfd749e17691d74fd4717c0edf8\"" Jan 23 18:58:21.799434 containerd[1599]: time="2026-01-23T18:58:21.799304301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:58:21.804000 audit[4786]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:21.808666 kernel: kauditd_printk_skb: 409 callbacks suppressed Jan 23 18:58:21.808731 kernel: audit: type=1325 audit(1769194701.804:708): table=filter:131 family=2 entries=17 op=nft_register_rule pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:21.804000 audit[4786]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff45a95680 a2=0 a3=7fff45a9566c items=0 ppid=2952 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.829009 kernel: audit: type=1300 audit(1769194701.804:708): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff45a95680 a2=0 a3=7fff45a9566c items=0 ppid=2952 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:21.835180 kernel: audit: type=1327 audit(1769194701.804:708): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:21.835000 audit[4786]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:21.835000 audit[4786]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff45a95680 a2=0 a3=7fff45a9566c items=0 ppid=2952 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.857104 kernel: audit: type=1325 
audit(1769194701.835:709): table=nat:132 family=2 entries=35 op=nft_register_chain pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:21.857262 kernel: audit: type=1300 audit(1769194701.835:709): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff45a95680 a2=0 a3=7fff45a9566c items=0 ppid=2952 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:21.857290 kernel: audit: type=1327 audit(1769194701.835:709): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:21.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:21.880221 containerd[1599]: time="2026-01-23T18:58:21.880049757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:21.882000 containerd[1599]: time="2026-01-23T18:58:21.881929177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:58:21.882000 containerd[1599]: time="2026-01-23T18:58:21.881981051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:21.882329 kubelet[2813]: E0123 18:58:21.882258 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:21.882329 kubelet[2813]: E0123 18:58:21.882299 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:21.882429 kubelet[2813]: E0123 18:58:21.882409 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lwrn2_calico-system(16aed12c-3493-44d9-ad22-ad901db963e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:21.882457 kubelet[2813]: E0123 18:58:21.882439 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:21.903612 systemd-networkd[1510]: cali46c27fc773d: Gained IPv6LL Jan 23 18:58:21.967112 systemd-networkd[1510]: cali2d7a3a7c436: Gained IPv6LL Jan 23 18:58:22.435746 kubelet[2813]: E0123 18:58:22.435551 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:22.436704 containerd[1599]: time="2026-01-23T18:58:22.436198841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j5l7h,Uid:e954cb68-155f-4961-939e-caf1b1372055,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:22.480036 systemd-networkd[1510]: cali7fe5027542c: Gained IPv6LL Jan 23 18:58:22.585037 systemd-networkd[1510]: calie99003836e3: Link UP Jan 23 18:58:22.586341 systemd-networkd[1510]: calie99003836e3: Gained carrier Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.489 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--j5l7h-eth0 coredns-66bc5c9577- kube-system e954cb68-155f-4961-939e-caf1b1372055 837 0 2026-01-23 18:57:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-j5l7h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie99003836e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.489 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.526 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" HandleID="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Workload="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.526 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" HandleID="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Workload="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001357e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-j5l7h", "timestamp":"2026-01-23 18:58:22.526232971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.526 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.526 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
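The pod startup duration recorded a little earlier for kube-system/coredns-66bc5c9577-c5w5n (podStartSLOduration=36.765081867) is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic (illustrative; kubelet computes this internally, and datetime only carries microsecond precision):

# Recompute the startup duration from the two timestamps reported in the log.
from datetime import datetime, timezone

created  = datetime(2026, 1, 23, 18, 57, 45, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2026, 1, 23, 18, 58, 21, 765081, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to µs

print((observed - created).total_seconds())  # 36.765081, matching podStartSLOduration=36.765081867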
Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.526 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.538 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.545 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.552 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.555 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.558 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.558 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.560 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.571 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.578 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.578 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" host="localhost" Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.578 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:22.606546 containerd[1599]: 2026-01-23 18:58:22.578 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" HandleID="k8s-pod-network.608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Workload="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.581 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j5l7h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e954cb68-155f-4961-939e-caf1b1372055", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-j5l7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99003836e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.581 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.581 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie99003836e3 ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.586 
[INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.587 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--j5l7h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e954cb68-155f-4961-939e-caf1b1372055", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a", Pod:"coredns-66bc5c9577-j5l7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99003836e3", MAC:"02:fa:e5:e8:c2:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:22.609635 containerd[1599]: 2026-01-23 18:58:22.602 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" Namespace="kube-system" Pod="coredns-66bc5c9577-j5l7h" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--j5l7h-eth0" Jan 23 18:58:22.630000 audit[4819]: NETFILTER_CFG table=filter:133 family=2 entries=36 op=nft_register_chain pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:22.637452 containerd[1599]: time="2026-01-23T18:58:22.637361339Z" level=info msg="connecting to shim 608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a" 
address="unix:///run/containerd/s/47e2c38a5a52ba7dbe6840922052ffb6e7028f1663f44a31e11945c9856fa65b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:22.643793 kernel: audit: type=1325 audit(1769194702.630:710): table=filter:133 family=2 entries=36 op=nft_register_chain pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:58:22.644575 kernel: audit: type=1300 audit(1769194702.630:710): arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffc379d5870 a2=0 a3=7ffc379d585c items=0 ppid=4028 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.630000 audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffc379d5870 a2=0 a3=7ffc379d585c items=0 ppid=4028 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.630000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:22.673798 kernel: audit: type=1327 audit(1769194702.630:710): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:58:22.687701 kubelet[2813]: E0123 18:58:22.687198 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:22.688719 kubelet[2813]: E0123 18:58:22.688394 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:22.689184 kubelet[2813]: E0123 18:58:22.689161 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:22.690289 kubelet[2813]: E0123 18:58:22.690215 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:22.694373 
systemd[1]: Started cri-containerd-608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a.scope - libcontainer container 608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a. Jan 23 18:58:22.716000 audit: BPF prog-id=246 op=LOAD Jan 23 18:58:22.717000 audit: BPF prog-id=247 op=LOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=247 op=UNLOAD Jan 23 18:58:22.721311 kernel: audit: type=1334 audit(1769194702.716:711): prog-id=246 op=LOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=248 op=LOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=249 op=LOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=249 op=UNLOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=248 op=UNLOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.717000 audit: BPF prog-id=250 op=LOAD Jan 23 18:58:22.717000 audit[4838]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4827 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630386638396363663834373534613431306632363331653036323235 Jan 23 18:58:22.721056 systemd-resolved[1288]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:58:22.767757 containerd[1599]: time="2026-01-23T18:58:22.767684462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j5l7h,Uid:e954cb68-155f-4961-939e-caf1b1372055,Namespace:kube-system,Attempt:0,} returns sandbox id \"608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a\"" Jan 23 18:58:22.768582 kubelet[2813]: E0123 18:58:22.768536 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:22.772909 containerd[1599]: time="2026-01-23T18:58:22.772866718Z" level=info msg="CreateContainer within sandbox \"608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:22.784392 containerd[1599]: time="2026-01-23T18:58:22.784307357Z" level=info msg="Container d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:22.788325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564637833.mount: Deactivated successfully. 
Jan 23 18:58:22.792265 containerd[1599]: time="2026-01-23T18:58:22.792203005Z" level=info msg="CreateContainer within sandbox \"608f89ccf84754a410f2631e062254cd23fc69bc744e5889c58ebbd2c8381b3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072\"" Jan 23 18:58:22.793036 containerd[1599]: time="2026-01-23T18:58:22.792964263Z" level=info msg="StartContainer for \"d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072\"" Jan 23 18:58:22.793975 containerd[1599]: time="2026-01-23T18:58:22.793894536Z" level=info msg="connecting to shim d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072" address="unix:///run/containerd/s/47e2c38a5a52ba7dbe6840922052ffb6e7028f1663f44a31e11945c9856fa65b" protocol=ttrpc version=3 Jan 23 18:58:22.832282 systemd[1]: Started cri-containerd-d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072.scope - libcontainer container d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072. Jan 23 18:58:22.846000 audit: BPF prog-id=251 op=LOAD Jan 23 18:58:22.847000 audit: BPF prog-id=252 op=LOAD Jan 23 18:58:22.847000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.847000 audit: BPF prog-id=252 op=UNLOAD Jan 23 18:58:22.847000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.847000 audit: BPF prog-id=253 op=LOAD Jan 23 18:58:22.847000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.848000 audit: BPF prog-id=254 op=LOAD Jan 23 18:58:22.848000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.848000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.848000 audit: BPF prog-id=254 op=UNLOAD Jan 23 18:58:22.848000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.848000 audit: BPF prog-id=253 op=UNLOAD Jan 23 18:58:22.848000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.848000 audit: BPF prog-id=255 op=LOAD Jan 23 18:58:22.848000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4827 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439373263353162363033633239663863663532643435666434613639 Jan 23 18:58:22.873049 containerd[1599]: time="2026-01-23T18:58:22.872948414Z" level=info msg="StartContainer for \"d972c51b603c29f8cf52d45fd4a69fef8d17321e829ddf9699f9cca6bbc6e072\" returns successfully" Jan 23 18:58:22.888000 audit[4895]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:22.888000 audit[4895]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4eb34b30 a2=0 a3=7fff4eb34b1c items=0 ppid=2952 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:22.899000 audit[4895]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=4895 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:22.899000 audit[4895]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff4eb34b30 a2=0 a3=7fff4eb34b1c items=0 ppid=2952 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:22.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:23.183102 systemd-networkd[1510]: cali876cd065c8a: Gained IPv6LL Jan 23 18:58:23.692341 kubelet[2813]: E0123 18:58:23.692172 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:23.692341 kubelet[2813]: E0123 18:58:23.692258 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:23.693749 kubelet[2813]: E0123 18:58:23.693143 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:23.706050 kubelet[2813]: I0123 18:58:23.706000 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j5l7h" podStartSLOduration=38.705984693 podStartE2EDuration="38.705984693s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:23.704697735 +0000 UTC m=+43.438087896" watchObservedRunningTime="2026-01-23 18:58:23.705984693 +0000 UTC m=+43.439374854" Jan 23 18:58:23.922000 audit[4900]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:23.922000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc78f9a60 a2=0 a3=7ffcc78f9a4c items=0 ppid=2952 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:23.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:23.927000 audit[4900]: NETFILTER_CFG table=nat:137 family=2 entries=44 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:23.927000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcc78f9a60 a2=0 a3=7ffcc78f9a4c items=0 ppid=2952 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:23.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:24.080098 systemd-networkd[1510]: calie99003836e3: Gained IPv6LL Jan 23 18:58:24.698909 kubelet[2813]: E0123 18:58:24.698788 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:24.955000 audit[4908]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:24.955000 audit[4908]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6412f1d0 a2=0 a3=7ffc6412f1bc items=0 ppid=2952 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:24.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:24.980000 audit[4908]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:58:24.980000 audit[4908]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc6412f1d0 a2=0 a3=7ffc6412f1bc items=0 ppid=2952 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:24.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:58:25.701254 kubelet[2813]: E0123 18:58:25.701128 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:26.703187 kubelet[2813]: E0123 18:58:26.703112 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:33.432301 containerd[1599]: time="2026-01-23T18:58:33.431958995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:33.500301 containerd[1599]: time="2026-01-23T18:58:33.500201100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:33.501739 containerd[1599]: time="2026-01-23T18:58:33.501590622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:33.501739 containerd[1599]: time="2026-01-23T18:58:33.501641661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:33.502150 kubelet[2813]: E0123 18:58:33.501978 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:33.502150 kubelet[2813]: E0123 18:58:33.502037 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 
18:58:33.502453 containerd[1599]: time="2026-01-23T18:58:33.502223185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:33.502790 kubelet[2813]: E0123 18:58:33.502382 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d5fc65b5f-cz9b8_calico-system(cc04cd5b-edb0-48f3-88f2-e7b09f3ee672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:33.503088 kubelet[2813]: E0123 18:58:33.502923 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:33.566764 containerd[1599]: time="2026-01-23T18:58:33.566660789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:33.568333 containerd[1599]: time="2026-01-23T18:58:33.568286681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:33.568455 containerd[1599]: time="2026-01-23T18:58:33.568365558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:33.568784 kubelet[2813]: E0123 18:58:33.568686 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:33.568784 kubelet[2813]: E0123 18:58:33.568740 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:33.568926 kubelet[2813]: E0123 18:58:33.568786 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:33.569795 containerd[1599]: time="2026-01-23T18:58:33.569650912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:33.646209 containerd[1599]: time="2026-01-23T18:58:33.646031391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:33.647435 containerd[1599]: time="2026-01-23T18:58:33.647371791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:33.647533 containerd[1599]: time="2026-01-23T18:58:33.647440409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:33.647779 kubelet[2813]: E0123 18:58:33.647714 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:33.647779 kubelet[2813]: E0123 18:58:33.647743 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:33.647926 kubelet[2813]: E0123 18:58:33.647785 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:33.647926 kubelet[2813]: E0123 18:58:33.647896 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:58:34.430878 containerd[1599]: time="2026-01-23T18:58:34.430682652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:34.491003 containerd[1599]: time="2026-01-23T18:58:34.490980592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:34.492702 containerd[1599]: time="2026-01-23T18:58:34.492633815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:34.492702 containerd[1599]: time="2026-01-23T18:58:34.492679461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:34.493113 kubelet[2813]: E0123 18:58:34.493025 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:34.493113 
kubelet[2813]: E0123 18:58:34.493084 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:34.493205 kubelet[2813]: E0123 18:58:34.493133 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-jn62z_calico-apiserver(3d13162b-7811-42a8-ba7e-74957a3844c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:34.493205 kubelet[2813]: E0123 18:58:34.493158 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:35.431051 containerd[1599]: time="2026-01-23T18:58:35.431020705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:35.492369 containerd[1599]: time="2026-01-23T18:58:35.492277866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:35.493983 containerd[1599]: time="2026-01-23T18:58:35.493919207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:35.494059 containerd[1599]: time="2026-01-23T18:58:35.494033895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:35.494272 kubelet[2813]: E0123 18:58:35.494113 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:35.494272 kubelet[2813]: E0123 18:58:35.494180 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:35.494272 kubelet[2813]: E0123 18:58:35.494245 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-2gfjg_calico-apiserver(a83d5fe0-7502-4b47-a69c-d746b0e6550b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:35.494608 kubelet[2813]: E0123 18:58:35.494273 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:36.431248 containerd[1599]: time="2026-01-23T18:58:36.431053731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:36.497386 containerd[1599]: time="2026-01-23T18:58:36.497202178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:36.498819 containerd[1599]: time="2026-01-23T18:58:36.498633567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:36.498819 containerd[1599]: time="2026-01-23T18:58:36.498675736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:36.499020 kubelet[2813]: E0123 18:58:36.498923 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:36.499020 kubelet[2813]: E0123 18:58:36.498986 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:36.499695 kubelet[2813]: E0123 18:58:36.499038 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:36.500536 containerd[1599]: time="2026-01-23T18:58:36.500441263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:36.557884 containerd[1599]: time="2026-01-23T18:58:36.557683488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:36.559232 containerd[1599]: time="2026-01-23T18:58:36.559107133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:36.559232 containerd[1599]: time="2026-01-23T18:58:36.559148744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:36.559624 kubelet[2813]: E0123 18:58:36.559448 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 
18:58:36.559624 kubelet[2813]: E0123 18:58:36.559544 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:36.559624 kubelet[2813]: E0123 18:58:36.559626 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:36.559796 kubelet[2813]: E0123 18:58:36.559677 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:37.431656 containerd[1599]: time="2026-01-23T18:58:37.431434345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:58:37.496373 containerd[1599]: time="2026-01-23T18:58:37.496237602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:37.497983 containerd[1599]: time="2026-01-23T18:58:37.497946190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:58:37.498456 containerd[1599]: time="2026-01-23T18:58:37.498014448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:37.498531 kubelet[2813]: E0123 18:58:37.498314 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:37.498531 kubelet[2813]: E0123 18:58:37.498367 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:37.498695 kubelet[2813]: E0123 18:58:37.498666 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lwrn2_calico-system(16aed12c-3493-44d9-ad22-ad901db963e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:37.498927 kubelet[2813]: E0123 18:58:37.498775 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:39.332356 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:46346.service - OpenSSH per-connection server daemon (10.0.0.1:46346). Jan 23 18:58:39.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:39.335384 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 23 18:58:39.335460 kernel: audit: type=1130 audit(1769194719.331:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:39.440000 audit[4926]: USER_ACCT pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.441909 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 46346 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:39.444120 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:39.449922 systemd-logind[1577]: New session 11 of user core. 
Jan 23 18:58:39.440000 audit[4926]: CRED_ACQ pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.462241 kernel: audit: type=1101 audit(1769194719.440:734): pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.462289 kernel: audit: type=1103 audit(1769194719.440:735): pid=4926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.462313 kernel: audit: type=1006 audit(1769194719.440:736): pid=4926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 18:58:39.440000 audit[4926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde895500 a2=3 a3=0 items=0 ppid=1 pid=4926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:39.481108 kernel: audit: type=1300 audit(1769194719.440:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde895500 a2=3 a3=0 items=0 ppid=1 pid=4926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:39.481152 kernel: audit: type=1327 audit(1769194719.440:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:39.440000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:39.491026 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 18:58:39.493000 audit[4926]: USER_START pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.495000 audit[4930]: CRED_ACQ pid=4930 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.520229 kernel: audit: type=1105 audit(1769194719.493:737): pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.520269 kernel: audit: type=1103 audit(1769194719.495:738): pid=4930 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.609774 sshd[4930]: Connection closed by 10.0.0.1 port 46346 Jan 23 18:58:39.610232 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:39.611000 audit[4926]: USER_END pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.611000 audit[4926]: CRED_DISP pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.627411 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:46346.service: Deactivated successfully. Jan 23 18:58:39.629947 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:58:39.631881 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:58:39.634244 systemd-logind[1577]: Removed session 11. Jan 23 18:58:39.634600 kernel: audit: type=1106 audit(1769194719.611:739): pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.635019 kernel: audit: type=1104 audit(1769194719.611:740): pid=4926 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:39.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.151:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:58:44.622391 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:40808.service - OpenSSH per-connection server daemon (10.0.0.1:40808). Jan 23 18:58:44.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:40808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:44.624907 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:58:44.624963 kernel: audit: type=1130 audit(1769194724.621:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:40808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:44.694000 audit[4963]: USER_ACCT pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.695925 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 40808 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:44.699120 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:44.706605 systemd-logind[1577]: New session 12 of user core. Jan 23 18:58:44.696000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.721799 kernel: audit: type=1101 audit(1769194724.694:743): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.721931 kernel: audit: type=1103 audit(1769194724.696:744): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.721975 kernel: audit: type=1006 audit(1769194724.697:745): pid=4963 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 18:58:44.697000 audit[4963]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda95f7d70 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:44.742132 kernel: audit: type=1300 audit(1769194724.697:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda95f7d70 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:44.697000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:44.747037 kernel: audit: type=1327 audit(1769194724.697:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:44.753113 systemd[1]: Started session-12.scope - Session 12 of User 
core. Jan 23 18:58:44.755000 audit[4963]: USER_START pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.758000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.791721 kernel: audit: type=1105 audit(1769194724.755:746): pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.792334 kernel: audit: type=1103 audit(1769194724.758:747): pid=4967 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.848157 sshd[4967]: Connection closed by 10.0.0.1 port 40808 Jan 23 18:58:44.849024 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:44.851000 audit[4963]: USER_END pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.855169 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:40808.service: Deactivated successfully. Jan 23 18:58:44.857117 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:58:44.860280 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:58:44.863338 systemd-logind[1577]: Removed session 12. Jan 23 18:58:44.851000 audit[4963]: CRED_DISP pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.875451 kernel: audit: type=1106 audit(1769194724.851:748): pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.875628 kernel: audit: type=1104 audit(1769194724.851:749): pid=4963 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:44.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.151:22-10.0.0.1:40808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:58:45.431120 kubelet[2813]: E0123 18:58:45.430789 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:46.434582 kubelet[2813]: E0123 18:58:46.434516 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:58:47.431504 kubelet[2813]: E0123 18:58:47.431392 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:58:49.431737 kubelet[2813]: E0123 18:58:49.431567 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:58:49.784465 kubelet[2813]: E0123 18:58:49.784152 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:58:49.862249 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:40816.service - OpenSSH per-connection server daemon (10.0.0.1:40816). Jan 23 18:58:49.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:40816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:58:49.865102 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:58:49.865178 kernel: audit: type=1130 audit(1769194729.861:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:40816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:49.941000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:49.942199 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 40816 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:49.944269 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:49.949934 systemd-logind[1577]: New session 13 of user core. Jan 23 18:58:49.942000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:49.962607 kernel: audit: type=1101 audit(1769194729.941:752): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:49.962660 kernel: audit: type=1103 audit(1769194729.942:753): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:49.962693 kernel: audit: type=1006 audit(1769194729.942:754): pid=5011 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 18:58:49.942000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8f5455a0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:49.979764 kernel: audit: type=1300 audit(1769194729.942:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8f5455a0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:49.979880 kernel: audit: type=1327 audit(1769194729.942:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:49.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:49.986070 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 18:58:49.988000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:49.990000 audit[5016]: CRED_ACQ pid=5016 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.013918 kernel: audit: type=1105 audit(1769194729.988:755): pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.014048 kernel: audit: type=1103 audit(1769194729.990:756): pid=5016 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.097079 sshd[5016]: Connection closed by 10.0.0.1 port 40816 Jan 23 18:58:50.097371 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:50.098000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.098000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.122343 kernel: audit: type=1106 audit(1769194730.098:757): pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.122401 kernel: audit: type=1104 audit(1769194730.098:758): pid=5011 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.127991 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:40816.service: Deactivated successfully. Jan 23 18:58:50.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.151:22-10.0.0.1:40816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:50.130169 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:58:50.131355 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:58:50.134633 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:40828.service - OpenSSH per-connection server daemon (10.0.0.1:40828). 
Jan 23 18:58:50.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.151:22-10.0.0.1:40828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:50.135660 systemd-logind[1577]: Removed session 13. Jan 23 18:58:50.195000 audit[5030]: USER_ACCT pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.197020 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 40828 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:50.197000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.197000 audit[5030]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7da161a0 a2=3 a3=0 items=0 ppid=1 pid=5030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:50.197000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:50.199237 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:50.205162 systemd-logind[1577]: New session 14 of user core. Jan 23 18:58:50.215008 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:58:50.217000 audit[5030]: USER_START pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.219000 audit[5034]: CRED_ACQ pid=5034 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.331753 sshd[5034]: Connection closed by 10.0.0.1 port 40828 Jan 23 18:58:50.331275 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:50.333000 audit[5030]: USER_END pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.333000 audit[5030]: CRED_DISP pid=5030 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.342132 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:40828.service: Deactivated successfully. Jan 23 18:58:50.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.151:22-10.0.0.1:40828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:58:50.346600 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:58:50.350274 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:58:50.357062 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:40842.service - OpenSSH per-connection server daemon (10.0.0.1:40842). Jan 23 18:58:50.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.151:22-10.0.0.1:40842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:50.362312 systemd-logind[1577]: Removed session 14. Jan 23 18:58:50.471000 audit[5046]: USER_ACCT pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.472197 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 40842 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:50.472000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.472000 audit[5046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6ebb27c0 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:50.472000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:50.475010 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:50.482239 systemd-logind[1577]: New session 15 of user core. Jan 23 18:58:50.496052 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 18:58:50.499000 audit[5046]: USER_START pid=5046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.501000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.593118 sshd[5050]: Connection closed by 10.0.0.1 port 40842 Jan 23 18:58:50.593392 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:50.594000 audit[5046]: USER_END pid=5046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.594000 audit[5046]: CRED_DISP pid=5046 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:50.598392 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:40842.service: Deactivated successfully. Jan 23 18:58:50.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.151:22-10.0.0.1:40842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:50.600717 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:58:50.602227 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:58:50.604097 systemd-logind[1577]: Removed session 15. 
Jan 23 18:58:51.432680 kubelet[2813]: E0123 18:58:51.432520 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:58:52.431201 kubelet[2813]: E0123 18:58:52.431067 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:58:55.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:50806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:55.614690 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:50806.service - OpenSSH per-connection server daemon (10.0.0.1:50806). Jan 23 18:58:55.617362 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 18:58:55.617476 kernel: audit: type=1130 audit(1769194735.613:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:50806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:58:55.690000 audit[5068]: USER_ACCT pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.691532 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 50806 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:58:55.693511 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:55.700448 systemd-logind[1577]: New session 16 of user core. 
Jan 23 18:58:55.691000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.716158 kernel: audit: type=1101 audit(1769194735.690:779): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.716229 kernel: audit: type=1103 audit(1769194735.691:780): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.716246 kernel: audit: type=1006 audit(1769194735.691:781): pid=5068 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 18:58:55.691000 audit[5068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbe99e890 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:55.735991 kernel: audit: type=1300 audit(1769194735.691:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbe99e890 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:58:55.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:55.741192 kernel: audit: type=1327 audit(1769194735.691:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:58:55.751189 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:58:55.754000 audit[5068]: USER_START pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.756000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.783174 kernel: audit: type=1105 audit(1769194735.754:782): pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.783222 kernel: audit: type=1103 audit(1769194735.756:783): pid=5072 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.859668 sshd[5072]: Connection closed by 10.0.0.1 port 50806 Jan 23 18:58:55.860050 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:55.860000 audit[5068]: USER_END pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.865570 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:50806.service: Deactivated successfully. Jan 23 18:58:55.867777 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:58:55.868967 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:58:55.870638 systemd-logind[1577]: Removed session 16. Jan 23 18:58:55.861000 audit[5068]: CRED_DISP pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.885508 kernel: audit: type=1106 audit(1769194735.860:784): pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.885608 kernel: audit: type=1104 audit(1769194735.861:785): pid=5068 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:58:55.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.151:22-10.0.0.1:50806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:58:57.431016 containerd[1599]: time="2026-01-23T18:58:57.430969121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:57.491552 containerd[1599]: time="2026-01-23T18:58:57.491465020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:57.493126 containerd[1599]: time="2026-01-23T18:58:57.492971633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:57.493126 containerd[1599]: time="2026-01-23T18:58:57.493055540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:57.493241 kubelet[2813]: E0123 18:58:57.493219 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:57.493587 kubelet[2813]: E0123 18:58:57.493251 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:57.493587 kubelet[2813]: E0123 18:58:57.493388 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:57.494039 containerd[1599]: time="2026-01-23T18:58:57.493970002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:57.554776 containerd[1599]: time="2026-01-23T18:58:57.554690242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:57.556099 containerd[1599]: time="2026-01-23T18:58:57.555988128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:57.556099 containerd[1599]: time="2026-01-23T18:58:57.556027786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:57.556320 kubelet[2813]: E0123 18:58:57.556252 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:57.556320 kubelet[2813]: E0123 18:58:57.556314 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:57.556573 kubelet[2813]: E0123 18:58:57.556517 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d5fc65b5f-cz9b8_calico-system(cc04cd5b-edb0-48f3-88f2-e7b09f3ee672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:57.556608 kubelet[2813]: E0123 18:58:57.556578 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:58:57.556742 containerd[1599]: time="2026-01-23T18:58:57.556684695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:57.641490 containerd[1599]: time="2026-01-23T18:58:57.641376171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:57.642879 containerd[1599]: time="2026-01-23T18:58:57.642708951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:57.642879 containerd[1599]: time="2026-01-23T18:58:57.642785824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:58:57.643122 kubelet[2813]: E0123 18:58:57.643053 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:57.643122 kubelet[2813]: E0123 18:58:57.643115 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:57.643264 kubelet[2813]: E0123 18:58:57.643208 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5597745fdc-nd9dw_calico-system(53b6d9e4-9482-4a0d-8ab8-6e548b15413b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:57.643264 kubelet[2813]: E0123 18:58:57.643243 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:59:00.430342 kubelet[2813]: E0123 18:59:00.430229 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:00.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.151:22-10.0.0.1:50814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:00.878570 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:50814.service - OpenSSH per-connection server daemon (10.0.0.1:50814). Jan 23 18:59:00.881464 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:59:00.881514 kernel: audit: type=1130 audit(1769194740.877:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.151:22-10.0.0.1:50814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:00.989000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:00.991176 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 50814 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:01.003242 kernel: audit: type=1101 audit(1769194740.989:788): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.003310 kernel: audit: type=1103 audit(1769194741.002:789): pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.002000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.004257 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:01.011715 systemd-logind[1577]: New session 17 of user core. 
Jan 23 18:59:01.019764 kernel: audit: type=1006 audit(1769194741.002:790): pid=5091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 18:59:01.021916 kernel: audit: type=1300 audit(1769194741.002:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff82a630 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:01.002000 audit[5091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff82a630 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:01.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:01.037262 kernel: audit: type=1327 audit(1769194741.002:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:01.038174 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:59:01.043000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.060183 kernel: audit: type=1105 audit(1769194741.043:791): pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.060252 kernel: audit: type=1103 audit(1769194741.057:792): pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.057000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.196801 sshd[5095]: Connection closed by 10.0.0.1 port 50814 Jan 23 18:59:01.196718 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:01.198000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.202516 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:50814.service: Deactivated successfully. Jan 23 18:59:01.205079 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:59:01.206325 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:59:01.207936 systemd-logind[1577]: Removed session 17. 
Jan 23 18:59:01.198000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.221657 kernel: audit: type=1106 audit(1769194741.198:793): pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.221709 kernel: audit: type=1104 audit(1769194741.198:794): pid=5091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:01.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.151:22-10.0.0.1:50814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:01.434171 containerd[1599]: time="2026-01-23T18:59:01.433933788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:59:01.522945 containerd[1599]: time="2026-01-23T18:59:01.522741841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:01.524888 containerd[1599]: time="2026-01-23T18:59:01.524690081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:59:01.524888 containerd[1599]: time="2026-01-23T18:59:01.524741492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:59:01.525259 kubelet[2813]: E0123 18:59:01.525087 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:01.525259 kubelet[2813]: E0123 18:59:01.525157 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:01.525259 kubelet[2813]: E0123 18:59:01.525220 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-jn62z_calico-apiserver(3d13162b-7811-42a8-ba7e-74957a3844c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:01.525259 kubelet[2813]: E0123 18:59:01.525248 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:59:02.433694 containerd[1599]: time="2026-01-23T18:59:02.433568188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:59:02.493111 containerd[1599]: time="2026-01-23T18:59:02.493051871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:02.494770 containerd[1599]: time="2026-01-23T18:59:02.494698785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:59:02.494883 containerd[1599]: time="2026-01-23T18:59:02.494772333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:59:02.495132 kubelet[2813]: E0123 18:59:02.495017 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:02.495132 kubelet[2813]: E0123 18:59:02.495075 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:02.495273 kubelet[2813]: E0123 18:59:02.495155 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:02.496636 containerd[1599]: time="2026-01-23T18:59:02.496587864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:59:02.553691 containerd[1599]: time="2026-01-23T18:59:02.553659629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:02.555158 containerd[1599]: time="2026-01-23T18:59:02.555024953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:59:02.555158 containerd[1599]: time="2026-01-23T18:59:02.555124183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:59:02.555274 kubelet[2813]: E0123 18:59:02.555224 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:02.555274 kubelet[2813]: E0123 18:59:02.555261 2813 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:02.555983 kubelet[2813]: E0123 18:59:02.555353 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ttvc9_calico-system(b84e892a-422a-478b-8739-e473d68e3bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:02.555983 kubelet[2813]: E0123 18:59:02.555430 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:59:03.430471 kubelet[2813]: E0123 18:59:03.430318 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:03.432441 containerd[1599]: time="2026-01-23T18:59:03.432328680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:59:03.489886 containerd[1599]: time="2026-01-23T18:59:03.489608729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:03.491667 containerd[1599]: time="2026-01-23T18:59:03.491567208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:59:03.491725 containerd[1599]: time="2026-01-23T18:59:03.491602850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:59:03.491839 kubelet[2813]: E0123 18:59:03.491768 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:59:03.491895 kubelet[2813]: E0123 18:59:03.491801 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:59:03.492172 kubelet[2813]: E0123 18:59:03.492100 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-lwrn2_calico-system(16aed12c-3493-44d9-ad22-ad901db963e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:03.492454 kubelet[2813]: E0123 18:59:03.492212 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:59:03.492503 containerd[1599]: time="2026-01-23T18:59:03.492308483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:59:03.561282 containerd[1599]: time="2026-01-23T18:59:03.561188793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:03.562965 containerd[1599]: time="2026-01-23T18:59:03.562897253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:59:03.562965 containerd[1599]: time="2026-01-23T18:59:03.562947655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:59:03.563376 kubelet[2813]: E0123 18:59:03.563307 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:03.563376 kubelet[2813]: E0123 18:59:03.563350 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:03.563702 kubelet[2813]: E0123 18:59:03.563506 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b479c8c46-2gfjg_calico-apiserver(a83d5fe0-7502-4b47-a69c-d746b0e6550b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:03.563702 kubelet[2813]: E0123 18:59:03.563550 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:59:06.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.151:22-10.0.0.1:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:59:06.216358 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:36714.service - OpenSSH per-connection server daemon (10.0.0.1:36714). Jan 23 18:59:06.219935 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:59:06.219972 kernel: audit: type=1130 audit(1769194746.215:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.151:22-10.0.0.1:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:06.289000 audit[5110]: USER_ACCT pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.291075 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 36714 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:06.293781 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:06.301015 systemd-logind[1577]: New session 18 of user core. Jan 23 18:59:06.291000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.312216 kernel: audit: type=1101 audit(1769194746.289:797): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.312338 kernel: audit: type=1103 audit(1769194746.291:798): pid=5110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.312366 kernel: audit: type=1006 audit(1769194746.291:799): pid=5110 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 18:59:06.291000 audit[5110]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee4f270d0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:06.330263 kernel: audit: type=1300 audit(1769194746.291:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee4f270d0 a2=3 a3=0 items=0 ppid=1 pid=5110 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:06.330309 kernel: audit: type=1327 audit(1769194746.291:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:06.291000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:06.335194 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 18:59:06.337000 audit[5110]: USER_START pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.340000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.361691 kernel: audit: type=1105 audit(1769194746.337:800): pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.361744 kernel: audit: type=1103 audit(1769194746.340:801): pid=5114 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.432648 sshd[5114]: Connection closed by 10.0.0.1 port 36714 Jan 23 18:59:06.433047 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:06.434000 audit[5110]: USER_END pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.449059 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:36714.service: Deactivated successfully. Jan 23 18:59:06.434000 audit[5110]: CRED_DISP pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.454595 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:59:06.458911 kernel: audit: type=1106 audit(1769194746.434:802): pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.459171 kernel: audit: type=1104 audit(1769194746.434:803): pid=5110 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:06.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.151:22-10.0.0.1:36714 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:06.460544 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:59:06.462154 systemd-logind[1577]: Removed session 18. 
Jan 23 18:59:10.439350 kubelet[2813]: E0123 18:59:10.438920 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:59:11.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.151:22-10.0.0.1:36730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:11.447496 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). Jan 23 18:59:11.449900 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:59:11.449943 kernel: audit: type=1130 audit(1769194751.446:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.151:22-10.0.0.1:36730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:11.518000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.519925 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:11.522940 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:11.532012 kernel: audit: type=1101 audit(1769194751.518:806): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.520000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.542973 kernel: audit: type=1103 audit(1769194751.520:807): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.548319 systemd-logind[1577]: New session 19 of user core. 
Jan 23 18:59:11.561970 kernel: audit: type=1006 audit(1769194751.520:808): pid=5127 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 23 18:59:11.562020 kernel: audit: type=1300 audit(1769194751.520:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc30329100 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:11.520000 audit[5127]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc30329100 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:11.567320 kernel: audit: type=1327 audit(1769194751.520:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:11.520000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:11.569070 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:59:11.575000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.579000 audit[5131]: CRED_ACQ pid=5131 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.603462 kernel: audit: type=1105 audit(1769194751.575:809): pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.603593 kernel: audit: type=1103 audit(1769194751.579:810): pid=5131 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.696065 sshd[5131]: Connection closed by 10.0.0.1 port 36730 Jan 23 18:59:11.696735 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:11.697000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.697000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.721097 kernel: audit: type=1106 audit(1769194751.697:811): pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.721203 kernel: audit: type=1104 audit(1769194751.697:812): pid=5127 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.727083 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:36730.service: Deactivated successfully. Jan 23 18:59:11.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.151:22-10.0.0.1:36730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:11.729157 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:59:11.730713 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:59:11.734763 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:36742.service - OpenSSH per-connection server daemon (10.0.0.1:36742). Jan 23 18:59:11.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.151:22-10.0.0.1:36742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:11.735682 systemd-logind[1577]: Removed session 19. Jan 23 18:59:11.798000 audit[5145]: USER_ACCT pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.799472 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 36742 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:11.799000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.799000 audit[5145]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4e614dd0 a2=3 a3=0 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:11.799000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:11.801462 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:11.808264 systemd-logind[1577]: New session 20 of user core. Jan 23 18:59:11.818028 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 18:59:11.820000 audit[5145]: USER_START pid=5145 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:11.822000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.050010 sshd[5149]: Connection closed by 10.0.0.1 port 36742 Jan 23 18:59:12.050993 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:12.052000 audit[5145]: USER_END pid=5145 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.052000 audit[5145]: CRED_DISP pid=5145 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.062331 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:36742.service: Deactivated successfully. Jan 23 18:59:12.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.151:22-10.0.0.1:36742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:12.065620 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:59:12.066981 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:59:12.071889 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:36748.service - OpenSSH per-connection server daemon (10.0.0.1:36748). Jan 23 18:59:12.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.151:22-10.0.0.1:36748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:12.073213 systemd-logind[1577]: Removed session 20. 
Jan 23 18:59:12.131000 audit[5161]: USER_ACCT pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.132207 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 36748 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:12.132000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.133000 audit[5161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd73b7fcb0 a2=3 a3=0 items=0 ppid=1 pid=5161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:12.133000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:12.135238 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:12.142812 systemd-logind[1577]: New session 21 of user core. Jan 23 18:59:12.150141 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 18:59:12.153000 audit[5161]: USER_START pid=5161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.155000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.430930 kubelet[2813]: E0123 18:59:12.430721 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:12.434242 kubelet[2813]: E0123 18:59:12.434199 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:59:12.693761 sshd[5165]: Connection closed by 10.0.0.1 port 36748 Jan 23 18:59:12.695107 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:12.702000 audit[5161]: USER_END pid=5161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.702000 audit[5161]: CRED_DISP pid=5161 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.713213 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:43290.service - OpenSSH per-connection server daemon (10.0.0.1:43290). Jan 23 18:59:12.713986 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:36748.service: Deactivated successfully. Jan 23 18:59:12.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.151:22-10.0.0.1:43290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:12.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.151:22-10.0.0.1:36748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:12.718180 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:59:12.721974 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:59:12.724357 systemd-logind[1577]: Removed session 21. Jan 23 18:59:12.745000 audit[5184]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:12.745000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffe2761100 a2=0 a3=7fffe27610ec items=0 ppid=2952 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:12.745000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:12.753000 audit[5184]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:12.753000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffe2761100 a2=0 a3=0 items=0 ppid=2952 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:12.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:12.785000 audit[5180]: USER_ACCT pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.786765 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:12.787000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.787000 audit[5180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb05d25f0 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:12.787000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:12.789600 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:12.796238 systemd-logind[1577]: New session 22 of user core. Jan 23 18:59:12.809050 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 18:59:12.811000 audit[5180]: USER_START pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:12.813000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.013674 sshd[5188]: Connection closed by 10.0.0.1 port 43290 Jan 23 18:59:13.015618 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:13.017000 audit[5180]: USER_END pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.017000 audit[5180]: CRED_DISP pid=5180 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.026708 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:43290.service: Deactivated successfully. Jan 23 18:59:13.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.151:22-10.0.0.1:43290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:13.031439 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 18:59:13.037595 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:59:13.040177 systemd-logind[1577]: Removed session 22. Jan 23 18:59:13.043209 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294). Jan 23 18:59:13.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.151:22-10.0.0.1:43294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:59:13.114000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.115159 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:13.115000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.116000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff069c5e70 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:13.116000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:13.118485 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:13.125559 systemd-logind[1577]: New session 23 of user core. Jan 23 18:59:13.133253 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 18:59:13.136000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.138000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.241735 sshd[5203]: Connection closed by 10.0.0.1 port 43294 Jan 23 18:59:13.242118 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:13.244000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.244000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:13.248210 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:59:13.252517 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:43294.service: Deactivated successfully. Jan 23 18:59:13.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.151:22-10.0.0.1:43294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:13.256623 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:59:13.260529 systemd-logind[1577]: Removed session 23. 
Jan 23 18:59:13.772000 audit[5216]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:13.772000 audit[5216]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeadc565b0 a2=0 a3=7ffeadc5659c items=0 ppid=2952 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:13.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:13.780000 audit[5216]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:13.780000 audit[5216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeadc565b0 a2=0 a3=0 items=0 ppid=2952 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:13.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:16.433017 kubelet[2813]: E0123 18:59:16.432516 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:59:16.434324 kubelet[2813]: E0123 18:59:16.434286 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf" Jan 23 18:59:17.430875 kubelet[2813]: E0123 18:59:17.430775 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lwrn2" podUID="16aed12c-3493-44d9-ad22-ad901db963e9" Jan 23 18:59:17.431418 kubelet[2813]: E0123 18:59:17.431302 2813 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-2gfjg" podUID="a83d5fe0-7502-4b47-a69c-d746b0e6550b" Jan 23 18:59:18.259417 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:43298.service - OpenSSH per-connection server daemon (10.0.0.1:43298). Jan 23 18:59:18.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.151:22-10.0.0.1:43298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:18.262877 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 18:59:18.263033 kernel: audit: type=1130 audit(1769194758.258:854): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.151:22-10.0.0.1:43298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:18.325000 audit[5220]: USER_ACCT pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.327020 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 43298 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:18.329973 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:18.335732 systemd-logind[1577]: New session 24 of user core. 
Jan 23 18:59:18.327000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.347205 kernel: audit: type=1101 audit(1769194758.325:855): pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.347342 kernel: audit: type=1103 audit(1769194758.327:856): pid=5220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.353478 kernel: audit: type=1006 audit(1769194758.327:857): pid=5220 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 18:59:18.365585 kernel: audit: type=1300 audit(1769194758.327:857): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe97348430 a2=3 a3=0 items=0 ppid=1 pid=5220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:18.327000 audit[5220]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe97348430 a2=3 a3=0 items=0 ppid=1 pid=5220 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:18.370032 kernel: audit: type=1327 audit(1769194758.327:857): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:18.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:18.372209 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 18:59:18.385000 audit[5220]: USER_START pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.400877 kernel: audit: type=1105 audit(1769194758.385:858): pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.389000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.419866 kernel: audit: type=1103 audit(1769194758.389:859): pid=5226 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.431013 kubelet[2813]: E0123 18:59:18.430901 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:18.524715 sshd[5226]: Connection closed by 10.0.0.1 port 43298 Jan 23 18:59:18.525196 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:18.527000 audit[5220]: USER_END pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.533458 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:43298.service: Deactivated successfully. Jan 23 18:59:18.536987 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:59:18.527000 audit[5220]: CRED_DISP pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.541210 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:59:18.544076 systemd-logind[1577]: Removed session 24. 
Jan 23 18:59:18.551154 kernel: audit: type=1106 audit(1769194758.527:860): pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.551261 kernel: audit: type=1104 audit(1769194758.527:861): pid=5220 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:18.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.151:22-10.0.0.1:43298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:19.254000 audit[5239]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:19.254000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe067f3760 a2=0 a3=7ffe067f374c items=0 ppid=2952 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:19.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:19.265000 audit[5239]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:59:19.265000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe067f3760 a2=0 a3=7ffe067f374c items=0 ppid=2952 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:19.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:59:21.431612 kubelet[2813]: E0123 18:59:21.431536 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5597745fdc-nd9dw" podUID="53b6d9e4-9482-4a0d-8ab8-6e548b15413b" Jan 23 18:59:23.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.151:22-10.0.0.1:52566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:59:23.543497 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:52566.service - OpenSSH per-connection server daemon (10.0.0.1:52566). Jan 23 18:59:23.555641 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 18:59:23.555731 kernel: audit: type=1130 audit(1769194763.542:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.151:22-10.0.0.1:52566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:23.632000 audit[5268]: USER_ACCT pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.633456 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 52566 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:23.635954 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:23.633000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.654611 kernel: audit: type=1101 audit(1769194763.632:866): pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.654665 kernel: audit: type=1103 audit(1769194763.633:867): pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.655212 kernel: audit: type=1006 audit(1769194763.633:868): pid=5268 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 18:59:23.633000 audit[5268]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69798940 a2=3 a3=0 items=0 ppid=1 pid=5268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:23.664929 systemd-logind[1577]: New session 25 of user core. Jan 23 18:59:23.674078 kernel: audit: type=1300 audit(1769194763.633:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69798940 a2=3 a3=0 items=0 ppid=1 pid=5268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:23.674130 kernel: audit: type=1327 audit(1769194763.633:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:23.633000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:23.692121 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 18:59:23.695000 audit[5268]: USER_START pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.700000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.720426 kernel: audit: type=1105 audit(1769194763.695:869): pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.720480 kernel: audit: type=1103 audit(1769194763.700:870): pid=5272 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.817155 sshd[5272]: Connection closed by 10.0.0.1 port 52566 Jan 23 18:59:23.819038 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:23.827000 audit[5268]: USER_END pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.832606 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:59:23.835603 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:52566.service: Deactivated successfully. Jan 23 18:59:23.840268 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 18:59:23.842929 kernel: audit: type=1106 audit(1769194763.827:871): pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.845205 systemd-logind[1577]: Removed session 25. Jan 23 18:59:23.827000 audit[5268]: CRED_DISP pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.861929 kernel: audit: type=1104 audit(1769194763.827:872): pid=5268 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:23.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.151:22-10.0.0.1:52566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:59:26.434174 kubelet[2813]: E0123 18:59:26.434041 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d5fc65b5f-cz9b8" podUID="cc04cd5b-edb0-48f3-88f2-e7b09f3ee672" Jan 23 18:59:27.429537 kubelet[2813]: E0123 18:59:27.429348 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:59:28.833904 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:52578.service - OpenSSH per-connection server daemon (10.0.0.1:52578). Jan 23 18:59:28.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.151:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:28.836567 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:59:28.836664 kernel: audit: type=1130 audit(1769194768.833:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.151:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:59:28.917000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.918287 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 52578 ssh2: RSA SHA256:0X6B7DwjmiBFupsAjwsBg4ER2ifZOi9WgN/zn8neR6U Jan 23 18:59:28.922238 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:28.927987 systemd-logind[1577]: New session 26 of user core. 
Jan 23 18:59:28.919000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.938699 kernel: audit: type=1101 audit(1769194768.917:875): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.938761 kernel: audit: type=1103 audit(1769194768.919:876): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.938794 kernel: audit: type=1006 audit(1769194768.919:877): pid=5286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 18:59:28.919000 audit[5286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd59065340 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:28.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:28.957006 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 18:59:28.960716 kernel: audit: type=1300 audit(1769194768.919:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd59065340 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:59:28.960757 kernel: audit: type=1327 audit(1769194768.919:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:59:28.961000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.961000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.986702 kernel: audit: type=1105 audit(1769194768.961:878): pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:28.986754 kernel: audit: type=1103 audit(1769194768.961:879): pid=5290 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:29.054054 sshd[5290]: Connection closed by 10.0.0.1 port 52578 Jan 23 
18:59:29.054367 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:29.055000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:29.059282 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:52578.service: Deactivated successfully. Jan 23 18:59:29.061754 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:59:29.063007 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:59:29.064628 systemd-logind[1577]: Removed session 26. Jan 23 18:59:29.055000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:29.078413 kernel: audit: type=1106 audit(1769194769.055:880): pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:29.078462 kernel: audit: type=1104 audit(1769194769.055:881): pid=5286 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:59:29.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.151:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:59:30.434897 kubelet[2813]: E0123 18:59:30.434043 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b479c8c46-jn62z" podUID="3d13162b-7811-42a8-ba7e-74957a3844c9" Jan 23 18:59:30.437773 kubelet[2813]: E0123 18:59:30.437694 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ttvc9" podUID="b84e892a-422a-478b-8739-e473d68e3bdf"