Jan 15 05:52:37.823356 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 15 03:08:43 -00 2026 Jan 15 05:52:37.823396 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:52:37.823501 kernel: BIOS-provided physical RAM map: Jan 15 05:52:37.823519 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 05:52:37.823528 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 05:52:37.823538 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 05:52:37.823549 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 05:52:37.823559 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 05:52:37.823652 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 05:52:37.823664 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 05:52:37.823674 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 15 05:52:37.823688 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 05:52:37.823698 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 05:52:37.823708 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 05:52:37.823720 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 05:52:37.823731 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 05:52:37.823826 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 05:52:37.823838 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 05:52:37.823849 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 05:52:37.823859 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 05:52:37.823870 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 05:52:37.823881 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 05:52:37.823891 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 05:52:37.823902 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 05:52:37.823912 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 05:52:37.823923 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 05:52:37.823938 kernel: NX (Execute Disable) protection: active Jan 15 05:52:37.823948 kernel: APIC: Static calls initialized Jan 15 05:52:37.823959 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 15 05:52:37.823970 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 15 05:52:37.823980 kernel: extended physical RAM map: Jan 15 05:52:37.823991 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 05:52:37.824001 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 05:52:37.824012 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 05:52:37.824022 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 05:52:37.824033 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 05:52:37.824044 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 05:52:37.824058 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 05:52:37.824069 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 15 05:52:37.824080 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 15 05:52:37.824095 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 15 05:52:37.824110 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 15 05:52:37.824122 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 15 05:52:37.824133 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 05:52:37.824144 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 05:52:37.824156 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 05:52:37.824329 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 05:52:37.824344 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 05:52:37.824356 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 05:52:37.824367 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 05:52:37.824383 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 05:52:37.824395 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 05:52:37.824406 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 05:52:37.824510 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 05:52:37.824522 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 05:52:37.824533 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 05:52:37.824544 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 05:52:37.824556 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 05:52:37.824647 kernel: efi: EFI v2.7 by EDK II Jan 15 05:52:37.824661 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 15 05:52:37.824748 kernel: random: crng init done Jan 15 05:52:37.824765 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 15 05:52:37.824853 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 15 05:52:37.824866 kernel: secureboot: Secure boot disabled Jan 15 05:52:37.824877 kernel: SMBIOS 2.8 present. 
Jan 15 05:52:37.824888 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 15 05:52:37.824900 kernel: DMI: Memory slots populated: 1/1 Jan 15 05:52:37.824911 kernel: Hypervisor detected: KVM Jan 15 05:52:37.824922 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 05:52:37.824933 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 05:52:37.824944 kernel: kvm-clock: using sched offset of 62203087481 cycles Jan 15 05:52:37.824957 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 05:52:37.824973 kernel: tsc: Detected 2445.426 MHz processor Jan 15 05:52:37.824985 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 05:52:37.824997 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 05:52:37.825008 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 05:52:37.825020 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 15 05:52:37.825032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 05:52:37.825043 kernel: Using GB pages for direct mapping Jan 15 05:52:37.825059 kernel: ACPI: Early table checksum verification disabled Jan 15 05:52:37.825070 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 15 05:52:37.825082 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 15 05:52:37.825094 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825106 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825117 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 15 05:52:37.825129 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825144 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825157 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825330 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:52:37.825345 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 15 05:52:37.825357 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 15 05:52:37.825369 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 15 05:52:37.825381 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 15 05:52:37.825397 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 15 05:52:37.825498 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 15 05:52:37.825512 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 15 05:52:37.825524 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 15 05:52:37.825535 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 15 05:52:37.825547 kernel: No NUMA configuration found Jan 15 05:52:37.825559 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 15 05:52:37.825571 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 15 05:52:37.825587 kernel: Zone ranges: Jan 15 05:52:37.825599 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 05:52:37.825611 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 15 05:52:37.825622 kernel: Normal empty Jan 15 05:52:37.825634 kernel: Device empty Jan 15 
05:52:37.825646 kernel: Movable zone start for each node Jan 15 05:52:37.825657 kernel: Early memory node ranges Jan 15 05:52:37.825668 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 15 05:52:37.825769 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 15 05:52:37.825783 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 15 05:52:37.825794 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 15 05:52:37.825806 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 15 05:52:37.825818 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 15 05:52:37.825829 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 15 05:52:37.825841 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 15 05:52:37.825937 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 15 05:52:37.825950 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 05:52:37.825973 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 15 05:52:37.825989 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 15 05:52:37.826001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 05:52:37.826013 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 15 05:52:37.826025 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 15 05:52:37.826037 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 15 05:52:37.826049 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 15 05:52:37.826062 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 15 05:52:37.826078 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 05:52:37.826090 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 05:52:37.826102 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 05:52:37.826118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 05:52:37.826130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 05:52:37.826143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 05:52:37.826155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 05:52:37.826328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 05:52:37.826345 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 05:52:37.826357 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 05:52:37.826374 kernel: TSC deadline timer available Jan 15 05:52:37.826386 kernel: CPU topo: Max. logical packages: 1 Jan 15 05:52:37.826399 kernel: CPU topo: Max. logical dies: 1 Jan 15 05:52:37.826504 kernel: CPU topo: Max. dies per package: 1 Jan 15 05:52:37.826517 kernel: CPU topo: Max. threads per core: 1 Jan 15 05:52:37.826530 kernel: CPU topo: Num. cores per package: 4 Jan 15 05:52:37.826542 kernel: CPU topo: Num. 
threads per package: 4 Jan 15 05:52:37.826553 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 05:52:37.826569 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 05:52:37.826582 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 05:52:37.826594 kernel: kvm-guest: setup PV sched yield Jan 15 05:52:37.826606 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 15 05:52:37.826618 kernel: Booting paravirtualized kernel on KVM Jan 15 05:52:37.826631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 05:52:37.826643 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 05:52:37.826659 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 05:52:37.826672 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 05:52:37.826684 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 05:52:37.826696 kernel: kvm-guest: PV spinlocks enabled Jan 15 05:52:37.826709 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 05:52:37.826812 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:52:37.826826 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 05:52:37.826844 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 05:52:37.826856 kernel: Fallback order for Node 0: 0 Jan 15 05:52:37.826869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 15 05:52:37.826881 kernel: Policy zone: DMA32 Jan 15 05:52:37.826893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 05:52:37.826905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 05:52:37.826918 kernel: ftrace: allocating 40128 entries in 157 pages Jan 15 05:52:37.826934 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 05:52:37.826945 kernel: Dynamic Preempt: voluntary Jan 15 05:52:37.826957 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 05:52:37.826971 kernel: rcu: RCU event tracing is enabled. Jan 15 05:52:37.826983 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 05:52:37.826995 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 05:52:37.827008 kernel: Rude variant of Tasks RCU enabled. Jan 15 05:52:37.827020 kernel: Tracing variant of Tasks RCU enabled. Jan 15 05:52:37.827035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 05:52:37.827048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 05:52:37.827141 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:52:37.827155 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:52:37.827329 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:52:37.827344 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 05:52:37.827357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 15 05:52:37.827374 kernel: Console: colour dummy device 80x25 Jan 15 05:52:37.827386 kernel: printk: legacy console [ttyS0] enabled Jan 15 05:52:37.827399 kernel: ACPI: Core revision 20240827 Jan 15 05:52:37.827507 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 05:52:37.827520 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 05:52:37.827533 kernel: x2apic enabled Jan 15 05:52:37.827545 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 05:52:37.827562 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 05:52:37.827574 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 05:52:37.827587 kernel: kvm-guest: setup PV IPIs Jan 15 05:52:37.827599 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 05:52:37.827611 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:52:37.827624 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 15 05:52:37.827641 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 05:52:37.827657 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 05:52:37.827670 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 05:52:37.827682 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 05:52:37.827694 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 05:52:37.827707 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 05:52:37.827719 kernel: Speculative Store Bypass: Vulnerable Jan 15 05:52:37.827731 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 05:52:37.827748 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 05:52:37.827844 kernel: active return thunk: srso_alias_return_thunk Jan 15 05:52:37.827858 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 05:52:37.827871 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 05:52:37.827883 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 05:52:37.827895 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 05:52:37.827907 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 05:52:37.827924 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 05:52:37.827936 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 05:52:37.827948 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 05:52:37.827961 kernel: Freeing SMP alternatives memory: 32K Jan 15 05:52:37.827973 kernel: pid_max: default: 32768 minimum: 301 Jan 15 05:52:37.827985 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 05:52:37.827997 kernel: landlock: Up and running. Jan 15 05:52:37.828012 kernel: SELinux: Initializing. 
Jan 15 05:52:37.828025 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:52:37.828037 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:52:37.828050 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 05:52:37.828062 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 05:52:37.828075 kernel: signal: max sigframe size: 1776 Jan 15 05:52:37.828087 kernel: rcu: Hierarchical SRCU implementation. Jan 15 05:52:37.828103 kernel: rcu: Max phase no-delay instances is 400. Jan 15 05:52:37.828115 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 05:52:37.828127 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 05:52:37.828139 kernel: smp: Bringing up secondary CPUs ... Jan 15 05:52:37.828152 kernel: smpboot: x86: Booting SMP configuration: Jan 15 05:52:37.828164 kernel: .... node #0, CPUs: #1 #2 #3 Jan 15 05:52:37.828341 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 05:52:37.828359 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 15 05:52:37.828372 kernel: Memory: 2439044K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120816K reserved, 0K cma-reserved) Jan 15 05:52:37.828385 kernel: devtmpfs: initialized Jan 15 05:52:37.828397 kernel: x86/mm: Memory block size: 128MB Jan 15 05:52:37.828503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 15 05:52:37.828519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 15 05:52:37.828531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 15 05:52:37.828548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 15 05:52:37.828561 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 15 05:52:37.828573 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 15 05:52:37.828586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 05:52:37.828598 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 05:52:37.828610 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 05:52:37.828622 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 05:52:37.828638 kernel: audit: initializing netlink subsys (disabled) Jan 15 05:52:37.828651 kernel: audit: type=2000 audit(1768456342.194:1): state=initialized audit_enabled=0 res=1 Jan 15 05:52:37.828663 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 05:52:37.828675 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 05:52:37.828687 kernel: cpuidle: using governor menu Jan 15 05:52:37.828699 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 05:52:37.828712 kernel: dca service started, version 1.12.1 Jan 15 05:52:37.828728 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 15 05:52:37.828740 kernel: PCI: Using configuration type 1 for base access Jan 15 05:52:37.828752 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 05:52:37.828765 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 05:52:37.828777 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 05:52:37.828789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 05:52:37.828802 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 05:52:37.828817 kernel: ACPI: Added _OSI(Module Device) Jan 15 05:52:37.828830 kernel: ACPI: Added _OSI(Processor Device) Jan 15 05:52:37.828842 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 05:52:37.828854 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 05:52:37.828866 kernel: ACPI: Interpreter enabled Jan 15 05:52:37.828878 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 05:52:37.828890 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 05:52:37.828906 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 05:52:37.828918 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 05:52:37.828931 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 05:52:37.828943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 05:52:37.829589 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 05:52:37.829891 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 05:52:37.830364 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 05:52:37.830385 kernel: PCI host bridge to bus 0000:00 Jan 15 05:52:37.830757 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 05:52:37.831011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 05:52:37.831539 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 05:52:37.831791 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 15 05:52:37.832045 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 15 05:52:37.832569 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 15 05:52:37.832820 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 05:52:37.833109 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 05:52:37.833729 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 05:52:37.834011 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 15 05:52:37.834548 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 15 05:52:37.834812 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 15 05:52:37.835071 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 05:52:37.835609 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs Jan 15 05:52:37.835890 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 05:52:37.836161 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 15 05:52:37.836706 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 15 05:52:37.836970 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 15 05:52:37.837532 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 05:52:37.837807 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 15 05:52:37.838079 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xc1042000-0xc1042fff] Jan 15 05:52:37.838635 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jan 15 05:52:37.838916 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 05:52:37.839357 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 15 05:52:37.839730 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 15 05:52:37.839995 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 15 05:52:37.840544 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 15 05:52:37.840822 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 05:52:37.841083 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 05:52:37.841623 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs Jan 15 05:52:37.841900 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 05:52:37.842160 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 15 05:52:37.854762 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 15 05:52:37.854991 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 05:52:37.855370 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 15 05:52:37.855390 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 05:52:37.855405 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 05:52:37.855515 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 05:52:37.855530 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 05:52:37.855539 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 05:52:37.855547 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 05:52:37.855556 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 05:52:37.855564 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 05:52:37.855573 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 05:52:37.855582 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 05:52:37.855593 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 05:52:37.855601 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 05:52:37.855610 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 05:52:37.855619 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 05:52:37.855627 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 05:52:37.855636 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 05:52:37.855644 kernel: iommu: Default domain type: Translated Jan 15 05:52:37.855655 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 05:52:37.855664 kernel: efivars: Registered efivars operations Jan 15 05:52:37.855672 kernel: PCI: Using ACPI for IRQ routing Jan 15 05:52:37.855681 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 05:52:37.855690 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 15 05:52:37.855698 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 15 05:52:37.855706 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 15 05:52:37.855717 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 15 05:52:37.855725 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 15 05:52:37.855733 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 15 
05:52:37.855742 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 15 05:52:37.855750 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 15 05:52:37.855975 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 05:52:37.856350 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 05:52:37.856666 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 05:52:37.856679 kernel: vgaarb: loaded Jan 15 05:52:37.856688 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 05:52:37.856696 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 05:52:37.856705 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 05:52:37.856713 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 05:52:37.856721 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 05:52:37.856733 kernel: pnp: PnP ACPI init Jan 15 05:52:37.856960 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 15 05:52:37.856972 kernel: pnp: PnP ACPI: found 6 devices Jan 15 05:52:37.856981 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 05:52:37.856989 kernel: NET: Registered PF_INET protocol family Jan 15 05:52:37.856997 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 05:52:37.857006 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 05:52:37.857033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 05:52:37.857044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 05:52:37.857052 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 05:52:37.857060 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 05:52:37.857069 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:52:37.857080 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:52:37.857091 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 05:52:37.857099 kernel: NET: Registered PF_XDP protocol family Jan 15 05:52:37.857576 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 15 05:52:37.857788 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 15 05:52:37.858098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 05:52:37.858578 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 05:52:37.858782 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 05:52:37.858981 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 15 05:52:37.859365 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 15 05:52:37.859679 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 15 05:52:37.859694 kernel: PCI: CLS 0 bytes, default 64 Jan 15 05:52:37.859703 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:52:37.859712 kernel: Initialise system trusted keyrings Jan 15 05:52:37.859720 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 05:52:37.859734 kernel: Key type asymmetric registered Jan 15 05:52:37.859742 kernel: Asymmetric key parser 'x509' registered Jan 15 05:52:37.859751 kernel: Block layer SCSI generic (bsg) driver version 
0.4 loaded (major 250) Jan 15 05:52:37.859759 kernel: io scheduler mq-deadline registered Jan 15 05:52:37.859767 kernel: io scheduler kyber registered Jan 15 05:52:37.859776 kernel: io scheduler bfq registered Jan 15 05:52:37.859784 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 05:52:37.859796 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 05:52:37.859808 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 05:52:37.859816 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 05:52:37.859825 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 05:52:37.859836 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 05:52:37.859844 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 05:52:37.859852 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 05:52:37.859861 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 05:52:37.860075 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 05:52:37.860088 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 05:52:37.860542 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 05:52:37.860785 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T05:52:32 UTC (1768456352) Jan 15 05:52:37.860987 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 15 05:52:37.860999 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 05:52:37.861007 kernel: efifb: probing for efifb Jan 15 05:52:37.861016 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 15 05:52:37.861024 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 15 05:52:37.861033 kernel: efifb: scrolling: redraw Jan 15 05:52:37.861045 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 05:52:37.861054 kernel: Console: switching to colour frame buffer device 160x50 Jan 15 05:52:37.861062 kernel: fb0: EFI VGA frame buffer device Jan 15 05:52:37.861070 kernel: pstore: Using crash dump compression: deflate Jan 15 05:52:37.861079 kernel: pstore: Registered efi_pstore as persistent store backend Jan 15 05:52:37.861087 kernel: NET: Registered PF_INET6 protocol family Jan 15 05:52:37.861096 kernel: Segment Routing with IPv6 Jan 15 05:52:37.861107 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 05:52:37.861115 kernel: NET: Registered PF_PACKET protocol family Jan 15 05:52:37.861123 kernel: Key type dns_resolver registered Jan 15 05:52:37.861132 kernel: IPI shorthand broadcast: enabled Jan 15 05:52:37.861143 kernel: sched_clock: Marking stable (10702083367, 1510698526)->(12954677351, -741895458) Jan 15 05:52:37.861151 kernel: registered taskstats version 1 Jan 15 05:52:37.861159 kernel: Loading compiled-in X.509 certificates Jan 15 05:52:37.861330 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a89cae614c389520e311ccbffccefdc95226b716' Jan 15 05:52:37.861339 kernel: Demotion targets for Node 0: null Jan 15 05:52:37.861348 kernel: Key type .fscrypt registered Jan 15 05:52:37.861356 kernel: Key type fscrypt-provisioning registered Jan 15 05:52:37.861365 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 05:52:37.861373 kernel: ima: Allocated hash algorithm: sha1 Jan 15 05:52:37.861381 kernel: ima: No architecture policies found Jan 15 05:52:37.861393 kernel: clk: Disabling unused clocks Jan 15 05:52:37.861402 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 15 05:52:37.861496 kernel: Write protecting the kernel read-only data: 47104k Jan 15 05:52:37.861505 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 15 05:52:37.861514 kernel: Run /init as init process Jan 15 05:52:37.861522 kernel: with arguments: Jan 15 05:52:37.861531 kernel: /init Jan 15 05:52:37.861542 kernel: with environment: Jan 15 05:52:37.861550 kernel: HOME=/ Jan 15 05:52:37.861558 kernel: TERM=linux Jan 15 05:52:37.861567 kernel: SCSI subsystem initialized Jan 15 05:52:37.861575 kernel: libata version 3.00 loaded. Jan 15 05:52:37.861794 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 05:52:37.861806 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 05:52:37.862011 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 05:52:37.862386 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 05:52:37.862694 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 05:52:37.862935 kernel: scsi host0: ahci Jan 15 05:52:37.863390 kernel: scsi host1: ahci Jan 15 05:52:37.863735 kernel: scsi host2: ahci Jan 15 05:52:37.863968 kernel: scsi host3: ahci Jan 15 05:52:37.864364 kernel: scsi host4: ahci Jan 15 05:52:37.864706 kernel: scsi host5: ahci Jan 15 05:52:37.864723 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 15 05:52:37.864732 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 15 05:52:37.864741 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 15 05:52:37.864755 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 15 05:52:37.864764 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 15 05:52:37.864772 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 15 05:52:37.864781 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 05:52:37.864789 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 05:52:37.864797 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 05:52:37.864806 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 05:52:37.864817 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 05:52:37.864825 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 05:52:37.864833 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:52:37.864842 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 05:52:37.864852 kernel: ata3.00: applying bridge limits Jan 15 05:52:37.864868 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:52:37.864882 kernel: ata3.00: configured for UDMA/100 Jan 15 05:52:37.865529 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 05:52:37.865814 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 05:52:37.866093 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 05:52:37.866623 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 05:52:37.866648 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 05:52:37.866670 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jan 15 05:52:37.866681 kernel: GPT:16515071 != 27000831 Jan 15 05:52:37.866690 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 05:52:37.866698 kernel: GPT:16515071 != 27000831 Jan 15 05:52:37.866709 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 05:52:37.866717 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 05:52:37.866990 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 05:52:37.867004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 05:52:37.867017 kernel: device-mapper: uevent: version 1.0.3 Jan 15 05:52:37.867026 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 05:52:37.867035 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 05:52:37.867043 kernel: raid6: avx2x4 gen() 22577 MB/s Jan 15 05:52:37.867051 kernel: raid6: avx2x2 gen() 28926 MB/s Jan 15 05:52:37.867059 kernel: raid6: avx2x1 gen() 18785 MB/s Jan 15 05:52:37.867067 kernel: raid6: using algorithm avx2x2 gen() 28926 MB/s Jan 15 05:52:37.867078 kernel: raid6: .... xor() 23017 MB/s, rmw enabled Jan 15 05:52:37.867086 kernel: raid6: using avx2x2 recovery algorithm Jan 15 05:52:37.867095 kernel: xor: automatically using best checksumming function avx Jan 15 05:52:37.867103 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 05:52:37.867116 kernel: BTRFS: device fsid 0b6e2cdd-9800-410c-b18c-88de6acfe8db devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (180) Jan 15 05:52:37.867131 kernel: BTRFS info (device dm-0): first mount of filesystem 0b6e2cdd-9800-410c-b18c-88de6acfe8db Jan 15 05:52:37.867146 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:52:37.867166 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 05:52:37.867356 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 05:52:37.867365 kernel: loop: module loaded Jan 15 05:52:37.867373 kernel: loop0: detected capacity change from 0 to 100536 Jan 15 05:52:37.867382 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 05:52:37.867394 systemd[1]: Successfully made /usr/ read-only. Jan 15 05:52:37.867523 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:52:37.867534 systemd[1]: Detected virtualization kvm. Jan 15 05:52:37.867543 systemd[1]: Detected architecture x86-64. Jan 15 05:52:37.867552 systemd[1]: Running in initrd. Jan 15 05:52:37.867561 systemd[1]: No hostname configured, using default hostname. Jan 15 05:52:37.867570 systemd[1]: Hostname set to . Jan 15 05:52:37.867583 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:52:37.867592 systemd[1]: Queued start job for default target initrd.target. Jan 15 05:52:37.867601 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 05:52:37.867610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:52:37.867619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 15 05:52:37.867629 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 05:52:37.867638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:52:37.867649 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 05:52:37.867659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 05:52:37.867668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:52:37.867676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:52:37.867686 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:52:37.867697 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:52:37.867707 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:52:37.867715 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:52:37.867724 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:52:37.867733 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:52:37.867748 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:52:37.867764 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:52:37.867783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 05:52:37.867800 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 05:52:37.867814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:52:37.867823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:52:37.867832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:52:37.867841 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:52:37.867850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 05:52:37.867863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 05:52:37.867872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:52:37.867880 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 05:52:37.867890 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 05:52:37.867899 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 05:52:37.867908 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:52:37.867916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:52:37.867928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:52:37.867975 systemd-journald[319]: Collecting audit messages is enabled. Jan 15 05:52:37.868001 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 05:52:37.868017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:52:37.868038 kernel: audit: type=1130 audit(1768456357.841:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:37.868053 systemd-journald[319]: Journal started Jan 15 05:52:37.868085 systemd-journald[319]: Runtime Journal (/run/log/journal/d534befce94a4ce39f6ffb2ba3ff9b0c) is 6M, max 48M, 42M free. Jan 15 05:52:37.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.922379 kernel: audit: type=1130 audit(1768456357.894:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.922565 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:52:37.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.944953 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 05:52:37.979403 kernel: audit: type=1130 audit(1768456357.942:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:37.998606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 05:52:38.039053 kernel: audit: type=1130 audit(1768456357.985:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.064389 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 05:52:38.069512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 05:52:38.095764 systemd-modules-load[322]: Inserted module 'br_netfilter' Jan 15 05:52:38.102513 kernel: Bridge firewalling registered Jan 15 05:52:38.098697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:52:38.139359 kernel: audit: type=1130 audit(1768456358.109:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.110985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:52:38.184379 kernel: audit: type=1130 audit(1768456358.146:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:38.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.185916 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 05:52:38.195618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 05:52:38.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.251604 kernel: audit: type=1130 audit(1768456358.189:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.256668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 05:52:38.270670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 05:52:38.300643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:52:38.307870 systemd-tmpfiles[333]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 05:52:38.359609 kernel: audit: type=1130 audit(1768456358.316:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.360071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:52:38.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.401576 kernel: audit: type=1130 audit(1768456358.378:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.407062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:52:38.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.425965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:52:38.468365 kernel: audit: type=1130 audit(1768456358.421:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.424000 audit: BPF prog-id=6 op=LOAD Jan 15 05:52:38.482761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:52:38.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:38.512040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 05:52:38.597689 dracut-cmdline[361]: dracut-109 Jan 15 05:52:38.599688 systemd-resolved[358]: Positive Trust Anchors: Jan 15 05:52:38.599698 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:52:38.599703 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:52:38.599737 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:52:38.675806 systemd-resolved[358]: Defaulting to hostname 'linux'. Jan 15 05:52:38.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:38.722614 dracut-cmdline[361]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:52:38.682039 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:52:38.710754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:52:39.014370 kernel: Loading iSCSI transport class v2.0-870. Jan 15 05:52:39.056691 kernel: iscsi: registered transport (tcp) Jan 15 05:52:39.113885 kernel: iscsi: registered transport (qla4xxx) Jan 15 05:52:39.113956 kernel: QLogic iSCSI HBA Driver Jan 15 05:52:39.193869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:52:39.268877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:52:39.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:39.300670 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:52:39.434594 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 05:52:39.444131 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 05:52:39.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:39.486650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 05:52:39.556836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 15 05:52:39.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:39.565000 audit: BPF prog-id=7 op=LOAD Jan 15 05:52:39.565000 audit: BPF prog-id=8 op=LOAD Jan 15 05:52:39.567162 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:52:39.654701 systemd-udevd[572]: Using default interface naming scheme 'v257'. Jan 15 05:52:39.696987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:52:39.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:39.727777 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 05:52:39.824999 dracut-pre-trigger[628]: rd.md=0: removing MD RAID activation Jan 15 05:52:39.935576 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:52:39.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:39.957855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:52:39.992394 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:52:40.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:40.004000 audit: BPF prog-id=9 op=LOAD Jan 15 05:52:40.006390 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:52:40.116152 systemd-networkd[727]: lo: Link UP Jan 15 05:52:40.116382 systemd-networkd[727]: lo: Gained carrier Jan 15 05:52:40.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:40.118350 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:52:40.129920 systemd[1]: Reached target network.target - Network. Jan 15 05:52:40.202625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:52:40.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:40.236570 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 05:52:40.383716 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 05:52:40.423374 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 05:52:40.441631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 05:52:40.460965 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 05:52:40.542890 disk-uuid[765]: Primary Header is updated. Jan 15 05:52:40.542890 disk-uuid[765]: Secondary Entries is updated. Jan 15 05:52:40.542890 disk-uuid[765]: Secondary Header is updated. 
Jan 15 05:52:40.546838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:52:40.617783 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 05:52:40.662930 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:52:40.662944 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:52:40.676728 systemd-networkd[727]: eth0: Link UP Jan 15 05:52:40.677153 systemd-networkd[727]: eth0: Gained carrier Jan 15 05:52:40.677377 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:52:40.755848 kernel: AES CTR mode by8 optimization enabled Jan 15 05:52:40.771737 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:52:40.789594 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 15 05:52:40.771915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:52:40.805610 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:52:40.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:40.839555 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:52:40.866625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:52:40.918559 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 05:52:40.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:40.928901 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:52:40.929135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:52:40.958946 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:52:40.976763 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 05:52:41.051831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:52:41.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:41.079061 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:52:41.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:41.639076 disk-uuid[766]: Warning: The kernel is still using the old partition table. Jan 15 05:52:41.639076 disk-uuid[766]: The new table will be used at the next reboot or after you Jan 15 05:52:41.639076 disk-uuid[766]: run partprobe(8) or kpartx(8) Jan 15 05:52:41.639076 disk-uuid[766]: The operation has completed successfully. Jan 15 05:52:41.692836 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 15 05:52:41.693966 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 05:52:41.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:41.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:41.716657 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 05:52:41.840793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Jan 15 05:52:41.840866 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:52:41.864888 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:52:41.903998 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:52:41.904080 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:52:41.939435 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:52:41.949122 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 05:52:41.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:41.965030 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 05:52:42.287069 ignition[878]: Ignition 2.24.0 Jan 15 05:52:42.287426 ignition[878]: Stage: fetch-offline Jan 15 05:52:42.287607 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:52:42.287629 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:52:42.287747 ignition[878]: parsed url from cmdline: "" Jan 15 05:52:42.287752 ignition[878]: no config URL provided Jan 15 05:52:42.287759 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 05:52:42.287772 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 15 05:52:42.287817 ignition[878]: op(1): [started] loading QEMU firmware config module Jan 15 05:52:42.287823 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 15 05:52:42.414913 ignition[878]: op(1): [finished] loading QEMU firmware config module Jan 15 05:52:42.598634 systemd-networkd[727]: eth0: Gained IPv6LL Jan 15 05:52:43.630932 ignition[878]: parsing config with SHA512: 34fb1201c30fe5e3dfbcc0ec30e0ff138925031f340068386bfe813555e4346241e32cdd4267f3394e03257984fe46c980002bc70a31c8f01fa80c68c312c2c1 Jan 15 05:52:43.653019 unknown[878]: fetched base config from "system" Jan 15 05:52:43.653034 unknown[878]: fetched user config from "qemu" Jan 15 05:52:43.654935 ignition[878]: fetch-offline: fetch-offline passed Jan 15 05:52:43.655027 ignition[878]: Ignition finished successfully Jan 15 05:52:43.685619 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:52:43.747958 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 15 05:52:43.747989 kernel: audit: type=1130 audit(1768456363.702:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:43.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:43.703133 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 15 05:52:43.705081 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 05:52:43.863771 ignition[887]: Ignition 2.24.0 Jan 15 05:52:43.863883 ignition[887]: Stage: kargs Jan 15 05:52:43.864093 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:52:43.864113 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:52:43.909108 ignition[887]: kargs: kargs passed Jan 15 05:52:43.910036 ignition[887]: Ignition finished successfully Jan 15 05:52:43.931076 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 05:52:43.975693 kernel: audit: type=1130 audit(1768456363.939:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:43.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:43.942911 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 05:52:44.328990 ignition[894]: Ignition 2.24.0 Jan 15 05:52:44.329094 ignition[894]: Stage: disks Jan 15 05:52:44.353400 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:52:44.353459 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:52:44.382395 ignition[894]: disks: disks passed Jan 15 05:52:44.382779 ignition[894]: Ignition finished successfully Jan 15 05:52:44.400165 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 05:52:44.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:44.462663 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 05:52:44.618102 kernel: audit: type=1130 audit(1768456364.448:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:44.618307 kernel: hrtimer: interrupt took 7325684 ns Jan 15 05:52:44.607626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 05:52:44.637811 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:52:44.687049 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:52:44.748573 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:52:44.767643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 05:52:45.026414 systemd-fsck[904]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 05:52:45.062091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 05:52:45.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:45.109619 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 05:52:45.170616 kernel: audit: type=1130 audit(1768456365.100:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:45.784925 kernel: EXT4-fs (vda9): mounted filesystem a9a0585b-a83b-49e4-a2e7-8f2fc277193d r/w with ordered data mode. Quota mode: none. Jan 15 05:52:45.788058 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 05:52:45.799026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 05:52:45.821845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:52:45.877888 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 05:52:45.888838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 05:52:45.954130 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Jan 15 05:52:45.954164 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:52:45.954369 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:52:45.888905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 05:52:46.017414 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:52:46.017450 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:52:45.888947 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:52:46.023142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 05:52:46.047687 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 05:52:46.068378 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 05:52:47.015738 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 05:52:47.081470 kernel: audit: type=1130 audit(1768456367.035:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:47.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:47.039938 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 05:52:47.115429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 05:52:47.161617 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 05:52:47.185716 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:52:47.363618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 05:52:47.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:47.417067 kernel: audit: type=1130 audit(1768456367.387:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:47.421126 ignition[1010]: INFO : Ignition 2.24.0 Jan 15 05:52:47.421126 ignition[1010]: INFO : Stage: mount Jan 15 05:52:47.446110 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:52:47.446110 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:52:47.446110 ignition[1010]: INFO : mount: mount passed Jan 15 05:52:47.446110 ignition[1010]: INFO : Ignition finished successfully Jan 15 05:52:47.488791 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 05:52:47.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:47.513454 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 05:52:47.548942 kernel: audit: type=1130 audit(1768456367.509:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:47.637165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:52:47.714485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Jan 15 05:52:47.742840 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:52:47.742957 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:52:47.786469 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:52:47.786670 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:52:47.791651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 05:52:47.909050 ignition[1041]: INFO : Ignition 2.24.0 Jan 15 05:52:47.909050 ignition[1041]: INFO : Stage: files Jan 15 05:52:47.927840 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:52:47.927840 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:52:47.956714 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Jan 15 05:52:47.977287 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 05:52:47.977287 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 05:52:48.026749 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 05:52:48.047620 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 05:52:48.047620 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 05:52:48.034351 unknown[1041]: wrote ssh authorized keys file for user: core Jan 15 05:52:48.086042 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 15 05:52:48.086042 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 15 05:52:48.244345 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 05:52:49.316620 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 15 05:52:49.345697 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 15 05:52:49.986846 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 05:52:57.490873 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 15 05:52:57.490873 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 05:52:57.532537 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:52:57.575850 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:52:57.575850 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 05:52:57.575850 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 05:52:57.575850 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:52:57.659126 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:52:57.659126 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 05:52:57.659126 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 05:52:57.718046 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:52:57.763581 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:52:57.763581 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 05:52:57.763581 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 15 05:52:57.763581 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 05:52:57.878823 kernel: audit: type=1130 audit(1768456377.798:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:57.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:57.878933 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:52:57.878933 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:52:57.878933 ignition[1041]: INFO : files: files passed Jan 15 05:52:57.878933 ignition[1041]: INFO : Ignition finished successfully Jan 15 05:52:57.771851 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 05:52:57.802527 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 05:52:57.930065 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 05:52:57.997974 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 05:52:58.031994 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:52:58.031994 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:52:58.173414 kernel: audit: type=1130 audit(1768456378.079:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.173484 kernel: audit: type=1131 audit(1768456378.080:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:52:58.173765 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:52:58.039825 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 05:52:58.050423 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 05:52:58.226396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:52:58.299913 kernel: audit: type=1130 audit(1768456378.235:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.236440 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 05:52:58.277394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 05:52:58.472048 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 05:52:58.640776 kernel: audit: type=1130 audit(1768456378.481:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.641933 kernel: audit: type=1131 audit(1768456378.481:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.472574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 05:52:58.482435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 05:52:58.570854 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 05:52:58.589036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 05:52:58.591985 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 05:52:58.932928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:52:58.993821 kernel: audit: type=1130 audit(1768456378.950:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:58.954069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 05:52:59.049119 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 15 05:52:59.049827 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:52:59.070345 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:52:59.090714 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 05:52:59.112491 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 05:52:59.185907 kernel: audit: type=1131 audit(1768456379.142:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.112771 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:52:59.177884 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 05:52:59.194077 systemd[1]: Stopped target basic.target - Basic System. Jan 15 05:52:59.249925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 05:52:59.274789 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:52:59.306032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 05:52:59.343542 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:52:59.367486 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 05:52:59.396056 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:52:59.418916 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 05:52:59.441891 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 05:52:59.468044 systemd[1]: Stopped target swap.target - Swaps. Jan 15 05:52:59.500412 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 05:52:59.501068 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:52:59.555848 kernel: audit: type=1131 audit(1768456379.517:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.551157 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:52:59.564778 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:52:59.584367 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 05:52:59.584983 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:52:59.673410 kernel: audit: type=1131 audit(1768456379.632:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.608166 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 15 05:52:59.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.608869 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 05:52:59.667123 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 05:52:59.667824 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:52:59.684128 systemd[1]: Stopped target paths.target - Path Units. Jan 15 05:52:59.705795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 05:52:59.709822 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:52:59.724919 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 05:52:59.749491 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 05:52:59.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.772576 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 05:52:59.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.772793 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:52:59.790503 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 05:52:59.791012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:52:59.813747 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 05:52:59.813842 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:52:59.849756 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 05:52:59.849905 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:53:00.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.869021 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 05:53:00.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.869396 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 05:53:00.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:52:59.897899 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 05:52:59.978447 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 05:53:00.000438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 05:53:00.001489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:53:00.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 15 05:53:00.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.172112 ignition[1098]: INFO : Ignition 2.24.0 Jan 15 05:53:00.172112 ignition[1098]: INFO : Stage: umount Jan 15 05:53:00.172112 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:53:00.172112 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:53:00.172112 ignition[1098]: INFO : umount: umount passed Jan 15 05:53:00.172112 ignition[1098]: INFO : Ignition finished successfully Jan 15 05:53:00.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.010539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 05:53:00.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.010967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:53:00.035804 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 05:53:00.036146 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:53:00.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.108147 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 05:53:00.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.108743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 05:53:00.141960 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 05:53:00.142382 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 05:53:00.172726 systemd[1]: Stopped target network.target - Network. Jan 15 05:53:00.183717 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 05:53:00.183799 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 05:53:00.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.206945 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 15 05:53:00.207029 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 05:53:00.228782 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 05:53:00.228862 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 05:53:00.494000 audit: BPF prog-id=6 op=UNLOAD Jan 15 05:53:00.253051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 05:53:00.253121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 05:53:00.283765 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 05:53:00.303735 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 05:53:00.325889 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 05:53:00.328002 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 05:53:00.329577 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 05:53:00.349525 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 05:53:00.349787 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 05:53:00.425471 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 05:53:00.425978 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 05:53:00.615129 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 05:53:00.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.615770 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 05:53:00.640000 audit: BPF prog-id=9 op=UNLOAD Jan 15 05:53:00.650872 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 05:53:00.659429 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 05:53:00.659531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:53:00.685914 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 05:53:00.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.701419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 05:53:00.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.701517 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:53:00.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.722729 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 05:53:00.722842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:53:00.748901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 05:53:00.749051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 05:53:00.776341 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:53:00.869046 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 15 05:53:00.869895 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:53:00.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.905612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 05:53:00.905935 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 05:53:00.927063 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 05:53:00.927130 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:53:00.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.948487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 05:53:00.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.948563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 05:53:01.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:00.976432 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 05:53:00.976512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 05:53:00.995799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 05:53:00.995876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:53:01.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.037828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 05:53:01.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.052749 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 05:53:01.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.052882 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:53:01.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.074946 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 05:53:01.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:53:01.075081 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:53:01.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.101795 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 05:53:01.101910 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 05:53:01.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:01.122376 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 05:53:01.122449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:53:01.153578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:53:01.153808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:53:01.182571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 05:53:01.182975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 05:53:01.231889 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 05:53:01.232407 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 05:53:01.250098 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 05:53:01.267804 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 05:53:01.300831 systemd[1]: Switching root. Jan 15 05:53:01.430127 systemd-journald[319]: Journal stopped Jan 15 05:53:05.966563 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). Jan 15 05:53:05.966794 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 05:53:05.966822 kernel: SELinux: policy capability open_perms=1 Jan 15 05:53:05.966843 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 05:53:05.966863 kernel: SELinux: policy capability always_check_network=0 Jan 15 05:53:05.966887 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 05:53:05.966907 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 05:53:05.966925 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 05:53:05.966949 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 05:53:05.966975 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 05:53:05.966996 systemd[1]: Successfully loaded SELinux policy in 185.555ms. Jan 15 05:53:05.967035 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.398ms. Jan 15 05:53:05.967061 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:53:05.967090 systemd[1]: Detected virtualization kvm. 
Jan 15 05:53:05.967112 systemd[1]: Detected architecture x86-64. Jan 15 05:53:05.967137 systemd[1]: Detected first boot. Jan 15 05:53:05.967157 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:53:05.967420 zram_generator::config[1142]: No configuration found. Jan 15 05:53:05.967448 kernel: Guest personality initialized and is inactive Jan 15 05:53:05.967470 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 05:53:05.967497 kernel: Initialized host personality Jan 15 05:53:05.967517 kernel: NET: Registered PF_VSOCK protocol family Jan 15 05:53:05.967547 systemd[1]: Populated /etc with preset unit settings. Jan 15 05:53:05.967570 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 15 05:53:05.967590 kernel: audit: type=1334 audit(1768456383.744:90): prog-id=12 op=LOAD Jan 15 05:53:05.967610 kernel: audit: type=1334 audit(1768456383.744:91): prog-id=3 op=UNLOAD Jan 15 05:53:05.967630 kernel: audit: type=1334 audit(1768456383.745:92): prog-id=13 op=LOAD Jan 15 05:53:05.967651 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 05:53:05.967782 kernel: audit: type=1334 audit(1768456383.745:93): prog-id=14 op=LOAD Jan 15 05:53:05.967812 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 05:53:05.967833 kernel: audit: type=1334 audit(1768456383.745:94): prog-id=4 op=UNLOAD Jan 15 05:53:05.967854 kernel: audit: type=1334 audit(1768456383.745:95): prog-id=5 op=UNLOAD Jan 15 05:53:05.967875 kernel: audit: type=1131 audit(1768456383.750:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:05.967896 kernel: audit: type=1130 audit(1768456383.878:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:05.967916 kernel: audit: type=1131 audit(1768456383.878:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:05.967942 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 05:53:05.967964 kernel: audit: type=1334 audit(1768456383.946:99): prog-id=12 op=UNLOAD Jan 15 05:53:05.968005 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 05:53:05.968029 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 05:53:05.968051 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 05:53:05.968073 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 05:53:05.968095 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 05:53:05.968117 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 05:53:05.968142 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 05:53:05.968381 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 05:53:05.968412 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:53:05.968435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 15 05:53:05.968457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 05:53:05.968480 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 05:53:05.968501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 05:53:05.968522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:53:05.968544 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 05:53:05.968570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:53:05.968590 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:53:05.968611 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 05:53:05.968633 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 05:53:05.968776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 05:53:05.968805 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 05:53:05.968826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:53:05.968850 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:53:05.968872 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 05:53:05.968892 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:53:05.968913 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:53:05.968936 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 05:53:05.968961 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 05:53:05.968984 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 05:53:05.969011 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:53:05.969035 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 05:53:05.969056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:53:05.969075 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 05:53:05.969098 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 05:53:05.969120 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:53:05.969143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:53:05.969378 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 05:53:05.969407 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 05:53:05.969428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 05:53:05.969449 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 05:53:05.969471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:05.969493 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 05:53:05.969515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 05:53:05.969545 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 15 05:53:05.969568 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 05:53:05.969590 systemd[1]: Reached target machines.target - Containers. Jan 15 05:53:05.969612 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 05:53:05.969631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:53:05.969653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:53:05.969787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 05:53:05.969819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 05:53:05.969840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:53:05.969862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 05:53:05.969883 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 05:53:05.969902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:53:05.969921 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 05:53:05.969945 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 05:53:05.969967 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 05:53:05.969989 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 05:53:05.970012 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 05:53:05.970035 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:53:05.970056 kernel: ACPI: bus type drm_connector registered Jan 15 05:53:05.970077 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:53:05.970102 kernel: fuse: init (API version 7.41) Jan 15 05:53:05.970124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:53:05.970146 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:53:05.970373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 05:53:05.970412 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 05:53:05.970435 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:53:05.970498 systemd-journald[1229]: Collecting audit messages is enabled. Jan 15 05:53:05.970543 systemd-journald[1229]: Journal started Jan 15 05:53:05.970578 systemd-journald[1229]: Runtime Journal (/run/log/journal/d534befce94a4ce39f6ffb2ba3ff9b0c) is 6M, max 48M, 42M free. Jan 15 05:53:06.010430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:06.010514 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 15 05:53:04.820000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 05:53:05.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:05.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:05.682000 audit: BPF prog-id=14 op=UNLOAD Jan 15 05:53:05.682000 audit: BPF prog-id=13 op=UNLOAD Jan 15 05:53:06.025037 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:53:05.708000 audit: BPF prog-id=15 op=LOAD Jan 15 05:53:05.710000 audit: BPF prog-id=16 op=LOAD Jan 15 05:53:05.711000 audit: BPF prog-id=17 op=LOAD Jan 15 05:53:05.961000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 05:53:05.961000 audit[1229]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffe089a290 a2=4000 a3=0 items=0 ppid=1 pid=1229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:05.961000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 05:53:03.718554 systemd[1]: Queued start job for default target multi-user.target. Jan 15 05:53:03.746123 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 05:53:03.749088 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 05:53:03.750590 systemd[1]: systemd-journald.service: Consumed 5.498s CPU time. Jan 15 05:53:06.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.053896 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 05:53:06.070019 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 05:53:06.083468 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 05:53:06.098058 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 05:53:06.114656 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 05:53:06.127165 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 05:53:06.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.146134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:53:06.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.162039 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 15 05:53:06.162872 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 05:53:06.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.178582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:53:06.179984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:53:06.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.196807 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:53:06.197506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:53:06.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.212946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:53:06.213653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 05:53:06.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.230471 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 05:53:06.231068 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 05:53:06.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.246139 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:53:06.246946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 15 05:53:06.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.262962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:53:06.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.281400 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:53:06.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.301607 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 05:53:06.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.320006 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 05:53:06.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.338026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:53:06.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.385527 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:53:06.400395 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 05:53:06.418429 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 05:53:06.450085 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 05:53:06.462916 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 05:53:06.463079 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:53:06.477977 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 05:53:06.494497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:53:06.494897 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:53:06.500112 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 15 05:53:06.516515 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 05:53:06.527472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:53:06.530003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 05:53:06.543008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:53:06.556912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 05:53:06.576914 systemd-journald[1229]: Time spent on flushing to /var/log/journal/d534befce94a4ce39f6ffb2ba3ff9b0c is 65.048ms for 1203 entries. Jan 15 05:53:06.576914 systemd-journald[1229]: System Journal (/var/log/journal/d534befce94a4ce39f6ffb2ba3ff9b0c) is 8M, max 163.5M, 155.5M free. Jan 15 05:53:06.666368 systemd-journald[1229]: Received client request to flush runtime journal. Jan 15 05:53:06.576591 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 05:53:06.613664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 05:53:06.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.628856 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 05:53:06.641346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 05:53:06.655647 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 05:53:06.673448 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 05:53:06.687407 kernel: loop1: detected capacity change from 0 to 219144 Jan 15 05:53:06.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:06.722636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:53:07.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:07.405362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 05:53:07.450604 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 05:53:07.466814 kernel: loop2: detected capacity change from 0 to 50784 Jan 15 05:53:07.576466 kernel: loop3: detected capacity change from 0 to 111560 Jan 15 05:53:07.581440 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 15 05:53:07.581467 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 15 05:53:07.596120 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 05:53:07.599610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 15 05:53:07.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:07.621633 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 05:53:07.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:07.659059 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 05:53:07.698387 kernel: loop4: detected capacity change from 0 to 219144 Jan 15 05:53:07.778631 kernel: loop5: detected capacity change from 0 to 50784 Jan 15 05:53:07.802648 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 05:53:07.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:07.818000 audit: BPF prog-id=18 op=LOAD Jan 15 05:53:07.818000 audit: BPF prog-id=19 op=LOAD Jan 15 05:53:07.818000 audit: BPF prog-id=20 op=LOAD Jan 15 05:53:07.821605 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 05:53:07.843000 audit: BPF prog-id=21 op=LOAD Jan 15 05:53:07.848568 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:53:07.866478 kernel: loop6: detected capacity change from 0 to 111560 Jan 15 05:53:07.867984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 05:53:07.888000 audit: BPF prog-id=22 op=LOAD Jan 15 05:53:07.889000 audit: BPF prog-id=23 op=LOAD Jan 15 05:53:07.890000 audit: BPF prog-id=24 op=LOAD Jan 15 05:53:07.893512 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 05:53:07.915000 audit: BPF prog-id=25 op=LOAD Jan 15 05:53:07.915000 audit: BPF prog-id=26 op=LOAD Jan 15 05:53:07.915000 audit: BPF prog-id=27 op=LOAD Jan 15 05:53:07.977108 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 05:53:08.033899 (sd-merge)[1286]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 05:53:08.153614 (sd-merge)[1286]: Merged extensions into '/usr'. Jan 15 05:53:08.556067 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 05:53:08.556090 systemd[1]: Reloading... Jan 15 05:53:09.258418 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 15 05:53:09.258439 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 15 05:53:09.311563 zram_generator::config[1327]: No configuration found. Jan 15 05:53:09.321075 systemd-nsresourced[1291]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 05:53:09.494975 systemd-oomd[1288]: No swap; memory pressure usage will be degraded Jan 15 05:53:10.108117 systemd[1]: Reloading finished in 1550 ms. Jan 15 05:53:10.152624 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 15 05:53:10.194036 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 15 05:53:10.194118 kernel: audit: type=1130 audit(1768456390.164:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.166131 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 05:53:10.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.207089 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 05:53:10.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.259533 kernel: audit: type=1130 audit(1768456390.205:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.259697 kernel: audit: type=1130 audit(1768456390.258:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.260526 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 05:53:10.297670 kernel: audit: type=1130 audit(1768456390.296:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.326812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:53:10.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.375602 kernel: audit: type=1130 audit(1768456390.344:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:10.395848 systemd[1]: Starting ensure-sysext.service... Jan 15 05:53:10.409506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 15 05:53:10.442372 kernel: audit: type=1334 audit(1768456390.429:150): prog-id=28 op=LOAD Jan 15 05:53:10.429000 audit: BPF prog-id=28 op=LOAD Jan 15 05:53:10.429000 audit: BPF prog-id=15 op=UNLOAD Jan 15 05:53:10.430000 audit: BPF prog-id=29 op=LOAD Jan 15 05:53:10.430000 audit: BPF prog-id=30 op=LOAD Jan 15 05:53:10.430000 audit: BPF prog-id=16 op=UNLOAD Jan 15 05:53:10.459349 kernel: audit: type=1334 audit(1768456390.429:151): prog-id=15 op=UNLOAD Jan 15 05:53:10.459391 kernel: audit: type=1334 audit(1768456390.430:152): prog-id=29 op=LOAD Jan 15 05:53:10.459428 kernel: audit: type=1334 audit(1768456390.430:153): prog-id=30 op=LOAD Jan 15 05:53:10.459447 kernel: audit: type=1334 audit(1768456390.430:154): prog-id=16 op=UNLOAD Jan 15 05:53:10.430000 audit: BPF prog-id=17 op=UNLOAD Jan 15 05:53:10.432000 audit: BPF prog-id=31 op=LOAD Jan 15 05:53:10.432000 audit: BPF prog-id=18 op=UNLOAD Jan 15 05:53:10.432000 audit: BPF prog-id=32 op=LOAD Jan 15 05:53:10.432000 audit: BPF prog-id=33 op=LOAD Jan 15 05:53:10.432000 audit: BPF prog-id=19 op=UNLOAD Jan 15 05:53:10.432000 audit: BPF prog-id=20 op=UNLOAD Jan 15 05:53:10.436000 audit: BPF prog-id=34 op=LOAD Jan 15 05:53:10.436000 audit: BPF prog-id=21 op=UNLOAD Jan 15 05:53:10.441000 audit: BPF prog-id=35 op=LOAD Jan 15 05:53:10.441000 audit: BPF prog-id=25 op=UNLOAD Jan 15 05:53:10.441000 audit: BPF prog-id=36 op=LOAD Jan 15 05:53:10.441000 audit: BPF prog-id=37 op=LOAD Jan 15 05:53:10.441000 audit: BPF prog-id=26 op=UNLOAD Jan 15 05:53:10.441000 audit: BPF prog-id=27 op=UNLOAD Jan 15 05:53:10.442000 audit: BPF prog-id=38 op=LOAD Jan 15 05:53:10.442000 audit: BPF prog-id=22 op=UNLOAD Jan 15 05:53:10.443000 audit: BPF prog-id=39 op=LOAD Jan 15 05:53:10.443000 audit: BPF prog-id=40 op=LOAD Jan 15 05:53:10.443000 audit: BPF prog-id=23 op=UNLOAD Jan 15 05:53:10.443000 audit: BPF prog-id=24 op=UNLOAD Jan 15 05:53:10.519648 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Jan 15 05:53:10.520058 systemd[1]: Reloading... Jan 15 05:53:11.006469 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 05:53:11.011562 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 05:53:11.012128 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 05:53:11.018381 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 15 05:53:11.018508 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 15 05:53:11.060619 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:53:11.060639 systemd-tmpfiles[1369]: Skipping /boot Jan 15 05:53:11.079460 systemd-resolved[1289]: Positive Trust Anchors: Jan 15 05:53:11.097417 zram_generator::config[1402]: No configuration found. Jan 15 05:53:11.079477 systemd-resolved[1289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:53:11.079485 systemd-resolved[1289]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:53:11.079528 systemd-resolved[1289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:53:11.117868 systemd-resolved[1289]: Defaulting to hostname 'linux'. Jan 15 05:53:11.166892 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:53:11.166916 systemd-tmpfiles[1369]: Skipping /boot Jan 15 05:53:11.563355 systemd[1]: Reloading finished in 1042 ms. Jan 15 05:53:11.608559 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:53:11.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:11.622154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 05:53:11.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:11.641000 audit: BPF prog-id=41 op=LOAD Jan 15 05:53:11.641000 audit: BPF prog-id=28 op=UNLOAD Jan 15 05:53:11.641000 audit: BPF prog-id=42 op=LOAD Jan 15 05:53:11.641000 audit: BPF prog-id=43 op=LOAD Jan 15 05:53:11.641000 audit: BPF prog-id=29 op=UNLOAD Jan 15 05:53:11.641000 audit: BPF prog-id=30 op=UNLOAD Jan 15 05:53:11.643000 audit: BPF prog-id=44 op=LOAD Jan 15 05:53:11.643000 audit: BPF prog-id=35 op=UNLOAD Jan 15 05:53:11.644000 audit: BPF prog-id=45 op=LOAD Jan 15 05:53:11.644000 audit: BPF prog-id=46 op=LOAD Jan 15 05:53:11.644000 audit: BPF prog-id=36 op=UNLOAD Jan 15 05:53:11.644000 audit: BPF prog-id=37 op=UNLOAD Jan 15 05:53:11.647000 audit: BPF prog-id=47 op=LOAD Jan 15 05:53:11.647000 audit: BPF prog-id=38 op=UNLOAD Jan 15 05:53:11.647000 audit: BPF prog-id=48 op=LOAD Jan 15 05:53:11.647000 audit: BPF prog-id=49 op=LOAD Jan 15 05:53:11.647000 audit: BPF prog-id=39 op=UNLOAD Jan 15 05:53:11.647000 audit: BPF prog-id=40 op=UNLOAD Jan 15 05:53:11.650000 audit: BPF prog-id=50 op=LOAD Jan 15 05:53:11.650000 audit: BPF prog-id=34 op=UNLOAD Jan 15 05:53:11.652000 audit: BPF prog-id=51 op=LOAD Jan 15 05:53:11.652000 audit: BPF prog-id=31 op=UNLOAD Jan 15 05:53:11.652000 audit: BPF prog-id=52 op=LOAD Jan 15 05:53:11.652000 audit: BPF prog-id=53 op=LOAD Jan 15 05:53:11.652000 audit: BPF prog-id=32 op=UNLOAD Jan 15 05:53:11.652000 audit: BPF prog-id=33 op=UNLOAD Jan 15 05:53:11.663120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:53:11.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:11.707947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 15 05:53:11.726565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:53:11.742388 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 05:53:11.775566 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 05:53:11.794711 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 05:53:11.809000 audit: BPF prog-id=8 op=UNLOAD Jan 15 05:53:11.809000 audit: BPF prog-id=7 op=UNLOAD Jan 15 05:53:11.812000 audit: BPF prog-id=54 op=LOAD Jan 15 05:53:11.816000 audit: BPF prog-id=55 op=LOAD Jan 15 05:53:11.820153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:53:11.839569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 05:53:11.858073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.858575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:53:11.861969 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 05:53:11.880379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 05:53:11.905460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:53:11.918841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:53:11.919473 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:53:11.919835 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:53:11.919985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.932609 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.932975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:53:11.933551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:53:11.933910 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:53:11.934041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:53:11.934153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.948003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.948573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 15 05:53:11.957962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:53:11.960000 audit[1453]: SYSTEM_BOOT pid=1453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 15 05:53:11.971074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:53:11.971647 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:53:11.971914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:53:11.972088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:53:11.992659 systemd[1]: Finished ensure-sysext.service. Jan 15 05:53:12.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.006592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:53:12.007380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 05:53:12.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.026645 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:53:12.027079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:53:12.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.041097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:53:12.042160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:53:12.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.068055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 15 05:53:12.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.098399 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:53:12.098871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 05:53:12.111030 systemd-udevd[1452]: Using default interface naming scheme 'v257'. Jan 15 05:53:12.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.117493 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 05:53:12.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:12.151931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:53:12.152033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:53:12.154000 audit: BPF prog-id=56 op=LOAD Jan 15 05:53:12.159531 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 05:53:12.185000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 05:53:12.185000 audit[1479]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed97d2200 a2=420 a3=0 items=0 ppid=1442 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:12.185000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:53:12.187413 augenrules[1479]: No rules Jan 15 05:53:12.190146 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:53:12.195099 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 05:53:13.267070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:53:13.307628 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:53:13.386537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 05:53:13.402130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 05:53:13.516627 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 05:53:14.112624 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 05:53:14.113507 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 15 05:53:15.013514 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 15 05:53:15.072382 kernel: ACPI: button: Power Button [PWRF] Jan 15 05:53:15.100684 systemd-networkd[1494]: lo: Link UP Jan 15 05:53:15.100701 systemd-networkd[1494]: lo: Gained carrier Jan 15 05:53:15.112553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:53:15.172540 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 05:53:15.156726 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:53:15.156732 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:53:15.157422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 05:53:15.181146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:53:15.187383 systemd-networkd[1494]: eth0: Link UP Jan 15 05:53:15.189586 systemd-networkd[1494]: eth0: Gained carrier Jan 15 05:53:15.189725 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:53:15.202029 systemd[1]: Reached target network.target - Network. Jan 15 05:53:15.211131 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:53:15.217615 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 05:53:15.237623 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 05:53:15.254532 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:53:15.256455 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. Jan 15 05:53:15.844427 systemd-resolved[1289]: Clock change detected. Flushing caches. Jan 15 05:53:15.845583 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 05:53:15.845726 systemd-timesyncd[1477]: Initial clock synchronization to Thu 2026-01-15 05:53:15.841497 UTC. Jan 15 05:53:15.912131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 05:53:16.441520 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 15 05:53:16.452743 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 05:53:16.495099 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 05:53:16.497622 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 05:53:16.812392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:53:17.149954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:53:17.152204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:53:17.198658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:53:17.488532 systemd-networkd[1494]: eth0: Gained IPv6LL Jan 15 05:53:17.523094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 15 05:53:17.542793 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 05:53:17.824041 ldconfig[1444]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 05:53:17.871802 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 05:53:17.902764 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 05:53:17.920496 kernel: kvm_amd: TSC scaling supported Jan 15 05:53:17.920551 kernel: kvm_amd: Nested Virtualization enabled Jan 15 05:53:17.920568 kernel: kvm_amd: Nested Paging enabled Jan 15 05:53:17.936923 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 05:53:17.937065 kernel: kvm_amd: PMU virtualization is disabled Jan 15 05:53:18.022736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:53:18.204960 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 05:53:18.222069 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:53:18.237707 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 05:53:18.253147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 05:53:18.269101 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 15 05:53:18.285634 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 05:53:18.301758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 05:53:18.320512 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 05:53:18.333541 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 05:53:18.344083 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 05:53:18.358790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 05:53:18.358823 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:53:18.369195 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:53:18.400026 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 05:53:18.428657 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 05:53:18.447071 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 05:53:18.463804 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 05:53:18.480977 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 05:53:18.510119 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 05:53:18.526093 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 05:53:18.544548 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 05:53:18.558941 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:53:18.568189 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:53:18.578139 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 05:53:18.578758 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 15 05:53:18.586048 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 05:53:18.606114 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 05:53:18.622534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 05:53:18.638797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 05:53:18.653194 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 05:53:18.669188 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 05:53:18.679005 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 05:53:18.689565 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 05:53:18.711416 jq[1559]: false Jan 15 05:53:18.712222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:53:18.728717 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 05:53:18.746447 extend-filesystems[1560]: Found /dev/vda6 Jan 15 05:53:18.755215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 05:53:18.769598 kernel: EDAC MC: Ver: 3.0.0 Jan 15 05:53:18.777574 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 05:53:18.779466 extend-filesystems[1560]: Found /dev/vda9 Jan 15 05:53:18.789594 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 05:53:18.797838 extend-filesystems[1560]: Checking size of /dev/vda9 Jan 15 05:53:18.816225 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 05:53:18.856828 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 05:53:18.862435 extend-filesystems[1560]: Resized partition /dev/vda9 Jan 15 05:53:18.871177 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 05:53:18.885031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 05:53:18.890095 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 05:53:18.900060 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 05:53:18.915378 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 15 05:53:18.912089 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jan 15 05:53:18.933812 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 05:53:18.949462 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 05:53:18.952587 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 05:53:18.958186 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 15 05:53:18.958415 oslogin_cache_refresh[1561]: Failure getting users, quitting Jan 15 05:53:18.958654 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 05:53:18.958794 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 15 05:53:18.958989 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 15 05:53:18.959690 oslogin_cache_refresh[1561]: Refreshing group entry cache Jan 15 05:53:18.965758 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 05:53:18.966777 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 05:53:18.982617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 05:53:18.984150 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 05:53:19.002722 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 15 05:53:19.002722 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:53:19.000498 oslogin_cache_refresh[1561]: Failure getting groups, quitting Jan 15 05:53:19.000513 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:53:19.012160 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 05:53:19.013661 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 05:53:19.035735 jq[1591]: true Jan 15 05:53:19.036494 extend-filesystems[1590]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 05:53:19.067415 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 05:53:19.105601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 05:53:19.145545 jq[1605]: true Jan 15 05:53:19.155758 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 05:53:19.160797 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 05:53:19.173689 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 05:53:19.178704 update_engine[1589]: I20260115 05:53:19.175107 1589 main.cc:92] Flatcar Update Engine starting Jan 15 05:53:19.223987 tar[1597]: linux-amd64/LICENSE Jan 15 05:53:19.224988 tar[1597]: linux-amd64/helm Jan 15 05:53:19.256614 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 05:53:19.265484 dbus-daemon[1557]: [system] SELinux support is enabled Jan 15 05:53:19.313102 update_engine[1589]: I20260115 05:53:19.305179 1589 update_check_scheduler.cc:74] Next update check in 3m51s Jan 15 05:53:19.265757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 05:53:19.281616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 05:53:19.281642 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 05:53:19.296165 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 05:53:19.296181 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 05:53:19.311571 systemd[1]: Started update-engine.service - Update Engine. 
Jan 15 05:53:19.322618 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 05:53:19.322618 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 05:53:19.322618 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 05:53:19.366619 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Jan 15 05:53:19.395617 bash[1644]: Updated "/home/core/.ssh/authorized_keys" Jan 15 05:53:19.322845 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 05:53:19.323514 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 05:53:19.366029 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 05:53:19.373089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 05:53:19.374833 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 05:53:19.452835 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 05:53:19.452976 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 05:53:19.457049 systemd-logind[1584]: New seat seat0. Jan 15 05:53:19.465706 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 05:53:19.528423 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 05:53:19.547526 locksmithd[1649]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 05:53:19.602480 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 05:53:19.611004 containerd[1599]: time="2026-01-15T05:53:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 05:53:19.614472 containerd[1599]: time="2026-01-15T05:53:19.614445441Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 05:53:19.618104 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 15 05:53:19.637162 containerd[1599]: time="2026-01-15T05:53:19.637005262Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.857µs" Jan 15 05:53:19.637162 containerd[1599]: time="2026-01-15T05:53:19.637126758Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 05:53:19.637452 containerd[1599]: time="2026-01-15T05:53:19.637180979Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 05:53:19.637452 containerd[1599]: time="2026-01-15T05:53:19.637193362Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 05:53:19.638075 containerd[1599]: time="2026-01-15T05:53:19.637745343Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 05:53:19.638075 containerd[1599]: time="2026-01-15T05:53:19.637940858Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638075 containerd[1599]: time="2026-01-15T05:53:19.638018743Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638075 containerd[1599]: time="2026-01-15T05:53:19.638030114Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638600 containerd[1599]: time="2026-01-15T05:53:19.638478772Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638600 containerd[1599]: time="2026-01-15T05:53:19.638580652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638667 containerd[1599]: time="2026-01-15T05:53:19.638614124Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638667 containerd[1599]: time="2026-01-15T05:53:19.638622941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638839 containerd[1599]: time="2026-01-15T05:53:19.638813607Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.638839 containerd[1599]: time="2026-01-15T05:53:19.638826551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 05:53:19.639111 containerd[1599]: time="2026-01-15T05:53:19.639030011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.639768 containerd[1599]: time="2026-01-15T05:53:19.639629741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 05:53:19.639816 containerd[1599]: time="2026-01-15T05:53:19.639771045Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 15 05:53:19.639816 containerd[1599]: time="2026-01-15T05:53:19.639783798Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 05:53:19.640082 containerd[1599]: time="2026-01-15T05:53:19.639817681Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 05:53:19.640122 containerd[1599]: time="2026-01-15T05:53:19.640108755Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 05:53:19.640602 containerd[1599]: time="2026-01-15T05:53:19.640196508Z" level=info msg="metadata content store policy set" policy=shared Jan 15 05:53:19.656222 containerd[1599]: time="2026-01-15T05:53:19.656181445Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 05:53:19.656222 containerd[1599]: time="2026-01-15T05:53:19.656224956Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:53:19.656496 containerd[1599]: time="2026-01-15T05:53:19.656480714Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:53:19.656496 containerd[1599]: time="2026-01-15T05:53:19.656493247Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656505891Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656523443Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656532881Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656541577Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656557637Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656571984Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656582303Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656592292Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656601579Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 05:53:19.656651 containerd[1599]: time="2026-01-15T05:53:19.656611818Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 05:53:19.656994 containerd[1599]: time="2026-01-15T05:53:19.656725972Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 05:53:19.658774 
containerd[1599]: time="2026-01-15T05:53:19.657079000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657100601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657111100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657120678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657129625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657628256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657651719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657667388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657683789Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657961077Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.657995370Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.658049752Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.658068948Z" level=info msg="Start snapshots syncer" Jan 15 05:53:19.658774 containerd[1599]: time="2026-01-15T05:53:19.658199582Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 05:53:19.660152 containerd[1599]: time="2026-01-15T05:53:19.658792639Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 05:53:19.660152 containerd[1599]: time="2026-01-15T05:53:19.658963137Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659108468Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659435599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659475624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659492315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659505800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659520086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659557556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659571713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659584267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 
05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659597251Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659732784Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659754073Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:53:19.665132 containerd[1599]: time="2026-01-15T05:53:19.659768149Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.659780533Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.659799378Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660151395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660175630Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660191149Z" level=info msg="runtime interface created" Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660199425Z" level=info msg="created NRI interface" Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660214593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660416289Z" level=info msg="Connect containerd service" Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.660444542Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 05:53:19.665653 containerd[1599]: time="2026-01-15T05:53:19.662149424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 05:53:19.667471 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 05:53:19.668012 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 05:53:19.684026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 05:53:19.724977 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 05:53:19.749506 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 05:53:19.766101 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 05:53:19.779160 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 05:53:19.925656 containerd[1599]: time="2026-01-15T05:53:19.925122882Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 05:53:19.926525 containerd[1599]: time="2026-01-15T05:53:19.925792723Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 15 05:53:19.926525 containerd[1599]: time="2026-01-15T05:53:19.926067921Z" level=info msg="Start subscribing containerd event" Jan 15 05:53:19.926525 containerd[1599]: time="2026-01-15T05:53:19.926208544Z" level=info msg="Start recovering state" Jan 15 05:53:19.930002 containerd[1599]: time="2026-01-15T05:53:19.927077665Z" level=info msg="Start event monitor" Jan 15 05:53:19.930083 containerd[1599]: time="2026-01-15T05:53:19.930063509Z" level=info msg="Start cni network conf syncer for default" Jan 15 05:53:19.930155 containerd[1599]: time="2026-01-15T05:53:19.930140212Z" level=info msg="Start streaming server" Jan 15 05:53:19.930202 containerd[1599]: time="2026-01-15T05:53:19.930190556Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 05:53:19.930746 containerd[1599]: time="2026-01-15T05:53:19.930727768Z" level=info msg="runtime interface starting up..." Jan 15 05:53:19.932174 containerd[1599]: time="2026-01-15T05:53:19.931626707Z" level=info msg="starting plugins..." Jan 15 05:53:19.933695 containerd[1599]: time="2026-01-15T05:53:19.933672285Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 05:53:19.934648 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 05:53:19.943588 containerd[1599]: time="2026-01-15T05:53:19.939544388Z" level=info msg="containerd successfully booted in 0.329327s" Jan 15 05:53:20.199159 tar[1597]: linux-amd64/README.md Jan 15 05:53:20.245107 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 05:53:23.015784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:53:23.031632 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 05:53:23.045162 systemd[1]: Startup finished in 14.252s (kernel) + 25.239s (initrd) + 20.865s (userspace) = 1min 357ms. Jan 15 05:53:23.049081 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:53:26.987161 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 05:53:26.997058 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:38210.service - OpenSSH per-connection server daemon (10.0.0.1:38210). Jan 15 05:53:27.454805 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 38210 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:27.460752 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:27.491055 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 05:53:27.496191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 05:53:27.515892 systemd-logind[1584]: New session 1 of user core. Jan 15 05:53:28.063688 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 05:53:28.072695 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 05:53:28.152583 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:28.171602 systemd-logind[1584]: New session 2 of user core. Jan 15 05:53:28.949054 systemd[1715]: Queued start job for default target default.target. Jan 15 05:53:28.960181 systemd[1715]: Created slice app.slice - User Application Slice. Jan 15 05:53:28.960532 systemd[1715]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
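The "Startup finished" line above sums the three boot phases; a small check with the logged durations shows the total is 60.356 s, i.e. 1min 356ms, one millisecond off the displayed 1min 357ms, presumably because systemd adds the unrounded per-phase values before formatting:

    kernel, initrd, userspace = 14.252, 25.239, 20.865   # seconds, from the journal line above
    total = kernel + initrd + userspace                  # 60.356 s
    minutes, seconds = divmod(total, 60)
    print(f"total = {total:.3f}s = {int(minutes)}min {seconds * 1000:.0f}ms")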
Jan 15 05:53:28.960546 systemd[1715]: Reached target paths.target - Paths. Jan 15 05:53:28.960600 systemd[1715]: Reached target timers.target - Timers. Jan 15 05:53:28.966797 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 05:53:28.969429 systemd[1715]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 05:53:29.063662 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 05:53:29.063793 systemd[1715]: Reached target sockets.target - Sockets. Jan 15 05:53:29.109817 systemd[1715]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 05:53:29.110133 systemd[1715]: Reached target basic.target - Basic System. Jan 15 05:53:29.110638 systemd[1715]: Reached target default.target - Main User Target. Jan 15 05:53:29.110682 systemd[1715]: Startup finished in 871ms. Jan 15 05:53:29.111110 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 05:53:29.131624 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 05:53:29.187627 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:38222.service - OpenSSH per-connection server daemon (10.0.0.1:38222). Jan 15 05:53:29.682510 kubelet[1697]: E0115 05:53:29.681629 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:53:29.687831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:53:29.688771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:53:29.690219 systemd[1]: kubelet.service: Consumed 7.236s CPU time, 263M memory peak. Jan 15 05:53:29.793629 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 38222 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:29.797080 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:29.839479 systemd-logind[1584]: New session 3 of user core. Jan 15 05:53:29.851070 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 05:53:29.916727 sshd[1734]: Connection closed by 10.0.0.1 port 38222 Jan 15 05:53:29.917593 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 15 05:53:29.951509 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:38222.service: Deactivated successfully. Jan 15 05:53:29.956209 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 05:53:29.961453 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit. Jan 15 05:53:29.968113 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:38226.service - OpenSSH per-connection server daemon (10.0.0.1:38226). Jan 15 05:53:29.970580 systemd-logind[1584]: Removed session 3. Jan 15 05:53:30.148180 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38226 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:30.152741 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:30.169671 systemd-logind[1584]: New session 4 of user core. Jan 15 05:53:30.187841 systemd[1]: Started session-4.scope - Session 4 of User core. 
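The kubelet exit above is the usual "node not yet bootstrapped" failure: /var/lib/kubelet/config.yaml does not exist until something (typically kubeadm init/join or an equivalent provisioning step, which has not run yet at this point in the log) writes it. A minimal sketch, assuming only the path from the error message, that reproduces the same pre-flight condition:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")   # path taken from the error above

    if not KUBELET_CONFIG.exists():
        # Same condition the kubelet reports: the config has not been written yet,
        # so the unit exits with status 1 and systemd schedules a restart.
        print(f"kubelet config missing: {KUBELET_CONFIG}")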
Jan 15 05:53:30.270088 sshd[1744]: Connection closed by 10.0.0.1 port 38226 Jan 15 05:53:30.270145 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 15 05:53:30.286462 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:38226.service: Deactivated successfully. Jan 15 05:53:30.290147 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 05:53:30.293614 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit. Jan 15 05:53:30.298745 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:38232.service - OpenSSH per-connection server daemon (10.0.0.1:38232). Jan 15 05:53:30.300082 systemd-logind[1584]: Removed session 4. Jan 15 05:53:30.430429 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 38232 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:30.434112 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:30.451186 systemd-logind[1584]: New session 5 of user core. Jan 15 05:53:30.466837 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 05:53:30.513511 sshd[1754]: Connection closed by 10.0.0.1 port 38232 Jan 15 05:53:30.514633 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 15 05:53:30.529943 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:38232.service: Deactivated successfully. Jan 15 05:53:30.533514 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 05:53:30.537092 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit. Jan 15 05:53:30.541454 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:38248.service - OpenSSH per-connection server daemon (10.0.0.1:38248). Jan 15 05:53:30.544610 systemd-logind[1584]: Removed session 5. Jan 15 05:53:30.696362 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 38248 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:30.700084 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:30.716906 systemd-logind[1584]: New session 6 of user core. Jan 15 05:53:30.727793 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 05:53:32.658173 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 05:53:32.660748 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:53:32.693928 sudo[1766]: pam_unix(sudo:session): session closed for user root Jan 15 05:53:32.727751 sshd[1765]: Connection closed by 10.0.0.1 port 38248 Jan 15 05:53:32.738573 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 15 05:53:32.772733 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:38248.service: Deactivated successfully. Jan 15 05:53:32.779768 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 05:53:32.785836 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit. Jan 15 05:53:32.794504 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:38252.service - OpenSSH per-connection server daemon (10.0.0.1:38252). Jan 15 05:53:32.798616 systemd-logind[1584]: Removed session 6. Jan 15 05:53:33.027915 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 38252 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:33.033158 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:33.071126 systemd-logind[1584]: New session 7 of user core. 
Jan 15 05:53:33.093967 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 05:53:33.170855 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 05:53:33.172147 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:53:33.188492 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 15 05:53:33.218761 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 05:53:33.219795 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:53:33.249881 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:53:33.488000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:53:33.490940 augenrules[1803]: No rules Jan 15 05:53:33.492798 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:53:33.493785 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 05:53:33.497135 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 15 05:53:33.498869 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 15 05:53:33.498945 kernel: audit: type=1305 audit(1768456413.488:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:53:33.506793 sshd[1777]: Connection closed by 10.0.0.1 port 38252 Jan 15 05:53:33.488000 audit[1803]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcef072ab0 a2=420 a3=0 items=0 ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:33.525096 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jan 15 05:53:33.586202 kernel: audit: type=1300 audit(1768456413.488:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcef072ab0 a2=420 a3=0 items=0 ppid=1784 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:33.599570 kernel: audit: type=1327 audit(1768456413.488:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:53:33.488000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:53:33.623650 kernel: audit: type=1130 audit(1768456413.493:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.641639 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:38252.service: Deactivated successfully. Jan 15 05:53:33.645882 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 05:53:33.648481 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. 
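The audit PROCTITLE records above store the command line as hex with NUL-separated argv entries; decoding the value from the CONFIG_CHANGE event shows the auditctl invocation that cleared the rules:

    # proctitle copied verbatim from the audit record above; argv entries are NUL-separated.
    hexstr = ("2F7362696E2F617564697463746C002D52"
              "002F6574632F61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules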
Jan 15 05:53:33.655132 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:38254.service - OpenSSH per-connection server daemon (10.0.0.1:38254). Jan 15 05:53:33.656648 systemd-logind[1584]: Removed session 7. Jan 15 05:53:33.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.495000 audit[1778]: USER_END pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.771115 kernel: audit: type=1131 audit(1768456413.493:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.789788 kernel: audit: type=1106 audit(1768456413.495:226): pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.790583 kernel: audit: type=1104 audit(1768456413.497:227): pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.497000 audit[1778]: CRED_DISP pid=1778 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.834375 kernel: audit: type=1106 audit(1768456413.527:228): pid=1773 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:33.527000 audit[1773]: USER_END pid=1773 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:33.528000 audit[1773]: CRED_DISP pid=1773 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.030146 kernel: audit: type=1104 audit(1768456413.528:229): pid=1773 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.055576 kernel: audit: type=1131 audit(1768456413.642:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.115:22-10.0.0.1:38252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:53:33.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.115:22-10.0.0.1:38252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:33.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.115:22-10.0.0.1:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:34.666000 audit[1812]: USER_ACCT pid=1812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.668923 sshd[1812]: Accepted publickey for core from 10.0.0.1 port 38254 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:53:34.670000 audit[1812]: CRED_ACQ pid=1812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.670000 audit[1812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed6d3ef10 a2=3 a3=0 items=0 ppid=1 pid=1812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:34.670000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:53:34.673222 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:53:34.723952 systemd-logind[1584]: New session 8 of user core. Jan 15 05:53:34.743796 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 05:53:34.751000 audit[1812]: USER_START pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.761000 audit[1816]: CRED_ACQ pid=1816 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:53:34.824000 audit[1817]: USER_ACCT pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:34.826936 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 05:53:34.826000 audit[1817]: CRED_REFR pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:53:34.829539 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:53:34.828000 audit[1817]: USER_START pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 05:53:39.829870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 05:53:39.838660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:53:44.595015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:53:44.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:44.605918 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 15 05:53:44.606015 kernel: audit: type=1130 audit(1768456424.595:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:53:44.651947 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:53:45.906932 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 05:53:46.265012 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 05:53:46.882630 kubelet[1847]: E0115 05:53:46.881864 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:53:46.894080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:53:46.894880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:53:46.896562 systemd[1]: kubelet.service: Consumed 5.227s CPU time, 110.5M memory peak. Jan 15 05:53:46.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:53:46.946827 kernel: audit: type=1131 audit(1768456426.894:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:53:54.763709 dockerd[1855]: time="2026-01-15T05:53:54.761700352Z" level=info msg="Starting up" Jan 15 05:53:54.774163 dockerd[1855]: time="2026-01-15T05:53:54.773822535Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 05:53:55.069608 dockerd[1855]: time="2026-01-15T05:53:55.067734768Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 05:53:55.899738 dockerd[1855]: time="2026-01-15T05:53:55.898771076Z" level=info msg="Loading containers: start." 
Jan 15 05:53:55.982677 kernel: Initializing XFRM netlink socket Jan 15 05:53:56.604000 audit[1911]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.684856 kernel: audit: type=1325 audit(1768456436.604:242): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.685146 kernel: audit: type=1300 audit(1768456436.604:242): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd58bf1470 a2=0 a3=0 items=0 ppid=1855 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.604000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd58bf1470 a2=0 a3=0 items=0 ppid=1855 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.685929 kernel: audit: type=1327 audit(1768456436.604:242): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:53:56.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:53:56.622000 audit[1913]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.736714 kernel: audit: type=1325 audit(1768456436.622:243): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.736803 kernel: audit: type=1300 audit(1768456436.622:243): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff5adc13a0 a2=0 a3=0 items=0 ppid=1855 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.622000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff5adc13a0 a2=0 a3=0 items=0 ppid=1855 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.788599 kernel: audit: type=1327 audit(1768456436.622:243): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:53:56.622000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:53:56.639000 audit[1915]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.831722 kernel: audit: type=1325 audit(1768456436.639:244): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.831861 kernel: audit: type=1300 audit(1768456436.639:244): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9dc21880 a2=0 a3=0 items=0 ppid=1855 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:53:56.639000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9dc21880 a2=0 a3=0 items=0 ppid=1855 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.871705 kernel: audit: type=1327 audit(1768456436.639:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:53:56.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:53:56.655000 audit[1917]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.912745 kernel: audit: type=1325 audit(1768456436.655:245): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.655000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb6e16c80 a2=0 a3=0 items=0 ppid=1855 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.655000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:53:56.674000 audit[1919]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.674000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdda01a2e0 a2=0 a3=0 items=0 ppid=1855 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:53:56.691000 audit[1921]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.691000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb47a8cf0 a2=0 a3=0 items=0 ppid=1855 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:53:56.717000 audit[1923]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.717000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffd6d0f5a0 a2=0 a3=0 items=0 ppid=1855 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 
05:53:56.736000 audit[1925]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:56.736000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff8cfc2290 a2=0 a3=0 items=0 ppid=1855 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:56.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:53:57.042000 audit[1928]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.042000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc97f7bb10 a2=0 a3=0 items=0 ppid=1855 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 05:53:57.062000 audit[1930]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.062000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdf15f9cd0 a2=0 a3=0 items=0 ppid=1855 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:53:57.081458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 05:53:57.094817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
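The NETFILTER_CFG bursts above are dockerd creating its standard chains (the hex proctitles decode to iptables calls registering DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT and DOCKER-ISOLATION-STAGE-1/-2), first for IPv4 (family=2) and then for IPv6 (family=10). A rough sketch, using abbreviated sample lines copied from this log, of tallying those records by table, address family and operation:

    import re
    from collections import Counter

    # Field layout copied from the NETFILTER_CFG records above; family 2 = AF_INET, 10 = AF_INET6.
    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=\d+ op=(\w+)")
    family_names = {"2": "ipv4", "10": "ipv6"}

    sample = """\
    audit: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain
    audit: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain
    audit: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule
    """

    counts = Counter((t, family_names.get(f, f), op) for t, f, op in pattern.findall(sample))
    for (table, family, op), n in counts.items():
        print(f"{table}/{family} {op}: {n}")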
Jan 15 05:53:57.096000 audit[1932]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.096000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffce43d0540 a2=0 a3=0 items=0 ppid=1855 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:53:57.132000 audit[1935]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.132000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd48796a30 a2=0 a3=0 items=0 ppid=1855 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:53:57.162000 audit[1937]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.162000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe84091fc0 a2=0 a3=0 items=0 ppid=1855 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:53:57.615000 audit[1969]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.615000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcb8f0f010 a2=0 a3=0 items=0 ppid=1855 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:53:57.633000 audit[1971]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.633000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdd9400dd0 a2=0 a3=0 items=0 ppid=1855 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:53:57.648000 audit[1973]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.648000 audit[1973]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe724c3eb0 a2=0 a3=0 items=0 ppid=1855 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:53:57.663000 audit[1975]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.663000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe59d25f40 a2=0 a3=0 items=0 ppid=1855 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:53:57.676000 audit[1977]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.676000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff537e2920 a2=0 a3=0 items=0 ppid=1855 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:53:57.692000 audit[1979]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.692000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3460e780 a2=0 a3=0 items=0 ppid=1855 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:53:57.710000 audit[1981]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.710000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffece232de0 a2=0 a3=0 items=0 ppid=1855 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:53:57.735000 audit[1983]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.735000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffce655d4a0 a2=0 a3=0 items=0 ppid=1855 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.735000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:53:57.756000 audit[1985]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.756000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd18c7f0b0 a2=0 a3=0 items=0 ppid=1855 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 05:53:57.771000 audit[1987]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.771000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffeda88370 a2=0 a3=0 items=0 ppid=1855 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.771000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:53:57.792000 audit[1989]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.792000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcb59d4f50 a2=0 a3=0 items=0 ppid=1855 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.792000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:53:57.814000 audit[1991]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.814000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd12da3be0 a2=0 a3=0 items=0 ppid=1855 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:53:57.829000 audit[1993]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.829000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc56799fd0 a2=0 a3=0 items=0 ppid=1855 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:53:57.884000 audit[1998]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.884000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff2344b190 a2=0 a3=0 items=0 ppid=1855 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:53:57.899000 audit[2000]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.899000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffddd55dcc0 a2=0 a3=0 items=0 ppid=1855 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 05:53:57.912000 audit[2002]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:57.912000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeca6e6ed0 a2=0 a3=0 items=0 ppid=1855 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.912000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:53:57.928000 audit[2004]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.928000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda75ebe70 a2=0 a3=0 items=0 ppid=1855 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:53:57.946000 audit[2006]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.946000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffecd82f310 a2=0 a3=0 items=0 ppid=1855 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 
15 05:53:57.962000 audit[2008]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:53:57.962000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff0c7e9a40 a2=0 a3=0 items=0 ppid=1855 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:57.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:53:58.151000 audit[2016]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.151000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcd6c248e0 a2=0 a3=0 items=0 ppid=1855 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 05:53:58.171000 audit[2018]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.171000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd0b201590 a2=0 a3=0 items=0 ppid=1855 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 05:53:58.179797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:53:58.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:53:58.238473 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:53:58.245000 audit[2028]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.245000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd37494030 a2=0 a3=0 items=0 ppid=1855 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 05:53:58.313000 audit[2039]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.313000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffce91bed10 a2=0 a3=0 items=0 ppid=1855 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 05:53:58.330000 audit[2041]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.330000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd90a9ee40 a2=0 a3=0 items=0 ppid=1855 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 05:53:58.347000 audit[2043]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.347000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc97001110 a2=0 a3=0 items=0 ppid=1855 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 05:53:58.365000 audit[2046]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.365000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff127d3570 a2=0 a3=0 items=0 ppid=1855 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:53:58.384000 audit[2048]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:53:58.384000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8d81a990 a2=0 a3=0 items=0 ppid=1855 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:53:58.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 05:53:58.387806 systemd-networkd[1494]: docker0: Link UP Jan 15 05:53:58.480888 dockerd[1855]: time="2026-01-15T05:53:58.477468056Z" level=info msg="Loading containers: done." Jan 15 05:53:59.069916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck703263835-merged.mount: Deactivated successfully. Jan 15 05:53:59.082090 kubelet[2021]: E0115 05:53:59.081921 2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:53:59.085714 dockerd[1855]: time="2026-01-15T05:53:59.085523266Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 05:53:59.088897 dockerd[1855]: time="2026-01-15T05:53:59.086504789Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 05:53:59.163628 dockerd[1855]: time="2026-01-15T05:53:59.162584788Z" level=info msg="Initializing buildkit" Jan 15 05:53:59.168634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:53:59.168874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:53:59.174114 systemd[1]: kubelet.service: Consumed 1.736s CPU time, 110.9M memory peak. Jan 15 05:53:59.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:53:59.763136 dockerd[1855]: time="2026-01-15T05:53:59.762726492Z" level=info msg="Completed buildkit initialization" Jan 15 05:53:59.838077 dockerd[1855]: time="2026-01-15T05:53:59.837859508Z" level=info msg="Daemon has completed initialization" Jan 15 05:53:59.840550 dockerd[1855]: time="2026-01-15T05:53:59.838764414Z" level=info msg="API listen on /run/docker.sock" Jan 15 05:53:59.842042 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 05:53:59.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:54:03.009651 containerd[1599]: time="2026-01-15T05:54:03.008537428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 15 05:54:04.075536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290326889.mount: Deactivated successfully. Jan 15 05:54:04.692835 update_engine[1589]: I20260115 05:54:04.690477 1589 update_attempter.cc:509] Updating boot flags... Jan 15 05:54:10.879724 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3456108445 wd_nsec: 3456106881 Jan 15 05:54:10.905567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 05:54:10.935794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:54:13.192209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:54:13.465052 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 15 05:54:13.465616 kernel: audit: type=1130 audit(1768456453.230:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:13.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:15.387006 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:54:16.146010 kubelet[2177]: E0115 05:54:16.145561 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:54:16.153212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:54:16.154050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:54:16.155519 systemd[1]: kubelet.service: Consumed 4.434s CPU time, 110.3M memory peak. Jan 15 05:54:16.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:54:16.194787 kernel: audit: type=1131 audit(1768456456.154:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 15 05:54:16.440662 containerd[1599]: time="2026-01-15T05:54:16.440077141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:16.442861 containerd[1599]: time="2026-01-15T05:54:16.442595323Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25517145" Jan 15 05:54:16.450784 containerd[1599]: time="2026-01-15T05:54:16.450733879Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:16.462177 containerd[1599]: time="2026-01-15T05:54:16.462098994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:16.467634 containerd[1599]: time="2026-01-15T05:54:16.463799134Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 13.455077926s" Jan 15 05:54:16.467634 containerd[1599]: time="2026-01-15T05:54:16.464015797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 15 05:54:16.524507 containerd[1599]: time="2026-01-15T05:54:16.524026921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 15 05:54:24.650811 containerd[1599]: time="2026-01-15T05:54:24.650190094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:24.652475 containerd[1599]: time="2026-01-15T05:54:24.652381138Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 15 05:54:24.656989 containerd[1599]: time="2026-01-15T05:54:24.656801320Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:24.666452 containerd[1599]: time="2026-01-15T05:54:24.665766298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:24.668447 containerd[1599]: time="2026-01-15T05:54:24.667853123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.143775078s" Jan 15 05:54:24.668447 containerd[1599]: time="2026-01-15T05:54:24.668171546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 15 
05:54:24.674988 containerd[1599]: time="2026-01-15T05:54:24.674659863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 15 05:54:26.356991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 05:54:26.478638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:54:27.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:27.878399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:54:27.896506 kernel: audit: type=1130 audit(1768456467.878:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:27.938100 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:54:29.141468 kubelet[2201]: E0115 05:54:29.141184 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:54:29.172496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:54:29.173130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:54:29.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:54:29.174662 systemd[1]: kubelet.service: Consumed 2.122s CPU time, 112.1M memory peak. Jan 15 05:54:29.194486 kernel: audit: type=1131 audit(1768456469.174:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 15 05:54:30.902589 containerd[1599]: time="2026-01-15T05:54:30.902144032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:30.904684 containerd[1599]: time="2026-01-15T05:54:30.904133195Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 15 05:54:30.906703 containerd[1599]: time="2026-01-15T05:54:30.906672017Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:30.919865 containerd[1599]: time="2026-01-15T05:54:30.919689276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:30.921072 containerd[1599]: time="2026-01-15T05:54:30.921034432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 6.246276486s" Jan 15 05:54:30.923484 containerd[1599]: time="2026-01-15T05:54:30.921397459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 15 05:54:30.924164 containerd[1599]: time="2026-01-15T05:54:30.923669097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 15 05:54:35.499515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205157072.mount: Deactivated successfully. 
Jan 15 05:54:38.239765 containerd[1599]: time="2026-01-15T05:54:38.238676254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:38.244215 containerd[1599]: time="2026-01-15T05:54:38.241907054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Jan 15 05:54:38.250511 containerd[1599]: time="2026-01-15T05:54:38.250210442Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:38.253750 containerd[1599]: time="2026-01-15T05:54:38.253487526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:38.255329 containerd[1599]: time="2026-01-15T05:54:38.255099412Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 7.331208371s" Jan 15 05:54:38.255329 containerd[1599]: time="2026-01-15T05:54:38.255200200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 15 05:54:38.258769 containerd[1599]: time="2026-01-15T05:54:38.258457775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 15 05:54:39.329425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 15 05:54:39.339161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:54:39.734214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570321432.mount: Deactivated successfully. Jan 15 05:54:40.288573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:54:40.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:40.325493 kernel: audit: type=1130 audit(1768456480.288:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:40.332984 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:54:40.519710 kubelet[2239]: E0115 05:54:40.519121 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:54:40.523869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:54:40.524499 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 15 05:54:40.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:54:40.525764 systemd[1]: kubelet.service: Consumed 895ms CPU time, 108.7M memory peak. Jan 15 05:54:40.543464 kernel: audit: type=1131 audit(1768456480.524:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:54:42.328368 containerd[1599]: time="2026-01-15T05:54:42.327836748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.331204 containerd[1599]: time="2026-01-15T05:54:42.331170547Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21653471" Jan 15 05:54:42.333466 containerd[1599]: time="2026-01-15T05:54:42.333194932Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.340727 containerd[1599]: time="2026-01-15T05:54:42.340410478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.341772 containerd[1599]: time="2026-01-15T05:54:42.341669753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.083171052s" Jan 15 05:54:42.341814 containerd[1599]: time="2026-01-15T05:54:42.341772034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 15 05:54:42.344907 containerd[1599]: time="2026-01-15T05:54:42.344721384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 15 05:54:42.847689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772477686.mount: Deactivated successfully. 
Jan 15 05:54:42.886136 containerd[1599]: time="2026-01-15T05:54:42.885867902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.889504 containerd[1599]: time="2026-01-15T05:54:42.889136524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 15 05:54:42.892813 containerd[1599]: time="2026-01-15T05:54:42.892524945Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.895852 containerd[1599]: time="2026-01-15T05:54:42.895804201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:42.896969 containerd[1599]: time="2026-01-15T05:54:42.896861190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 552.106644ms" Jan 15 05:54:42.896969 containerd[1599]: time="2026-01-15T05:54:42.896957530Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 15 05:54:42.897948 containerd[1599]: time="2026-01-15T05:54:42.897882425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 15 05:54:43.446770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348306655.mount: Deactivated successfully. Jan 15 05:54:50.578653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 15 05:54:50.592681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 05:54:51.677830 containerd[1599]: time="2026-01-15T05:54:51.676500828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:51.682074 containerd[1599]: time="2026-01-15T05:54:51.681842033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 15 05:54:51.685711 containerd[1599]: time="2026-01-15T05:54:51.685675203Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:51.691973 containerd[1599]: time="2026-01-15T05:54:51.691946602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:54:51.694199 containerd[1599]: time="2026-01-15T05:54:51.693949399Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 8.796039203s" Jan 15 05:54:51.694199 containerd[1599]: time="2026-01-15T05:54:51.694087507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 15 05:54:52.245814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:54:52.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:52.275652 kernel: audit: type=1130 audit(1768456492.245:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:52.286966 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:54:53.398701 kubelet[2362]: E0115 05:54:53.398505 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:54:53.405923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:54:53.406711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:54:53.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:54:53.409030 systemd[1]: kubelet.service: Consumed 2.392s CPU time, 109M memory peak. Jan 15 05:54:53.447526 kernel: audit: type=1131 audit(1768456493.407:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 15 05:54:59.636941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:54:59.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:59.637850 systemd[1]: kubelet.service: Consumed 2.392s CPU time, 109M memory peak. Jan 15 05:54:59.642834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:54:59.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:59.690822 kernel: audit: type=1130 audit(1768456499.636:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:59.693112 kernel: audit: type=1131 audit(1768456499.636:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:54:59.747945 systemd[1]: Reload requested from client PID 2392 ('systemctl') (unit session-8.scope)... Jan 15 05:54:59.748051 systemd[1]: Reloading... Jan 15 05:54:59.971488 zram_generator::config[2437]: No configuration found. Jan 15 05:55:00.365779 systemd[1]: Reloading finished in 616 ms. Jan 15 05:55:00.413000 audit: BPF prog-id=61 op=LOAD Jan 15 05:55:00.435640 kernel: audit: type=1334 audit(1768456500.413:295): prog-id=61 op=LOAD Jan 15 05:55:00.435852 kernel: audit: type=1334 audit(1768456500.414:296): prog-id=56 op=UNLOAD Jan 15 05:55:00.414000 audit: BPF prog-id=56 op=UNLOAD Jan 15 05:55:00.436576 kernel: audit: type=1334 audit(1768456500.416:297): prog-id=62 op=LOAD Jan 15 05:55:00.416000 audit: BPF prog-id=62 op=LOAD Jan 15 05:55:00.444451 kernel: audit: type=1334 audit(1768456500.416:298): prog-id=41 op=UNLOAD Jan 15 05:55:00.416000 audit: BPF prog-id=41 op=UNLOAD Jan 15 05:55:00.416000 audit: BPF prog-id=63 op=LOAD Jan 15 05:55:00.459777 kernel: audit: type=1334 audit(1768456500.416:299): prog-id=63 op=LOAD Jan 15 05:55:00.459861 kernel: audit: type=1334 audit(1768456500.416:300): prog-id=64 op=LOAD Jan 15 05:55:00.416000 audit: BPF prog-id=64 op=LOAD Jan 15 05:55:00.467763 kernel: audit: type=1334 audit(1768456500.416:301): prog-id=42 op=UNLOAD Jan 15 05:55:00.416000 audit: BPF prog-id=42 op=UNLOAD Jan 15 05:55:00.475667 kernel: audit: type=1334 audit(1768456500.416:302): prog-id=43 op=UNLOAD Jan 15 05:55:00.416000 audit: BPF prog-id=43 op=UNLOAD Jan 15 05:55:00.424000 audit: BPF prog-id=65 op=LOAD Jan 15 05:55:00.424000 audit: BPF prog-id=58 op=UNLOAD Jan 15 05:55:00.424000 audit: BPF prog-id=66 op=LOAD Jan 15 05:55:00.424000 audit: BPF prog-id=67 op=LOAD Jan 15 05:55:00.424000 audit: BPF prog-id=59 op=UNLOAD Jan 15 05:55:00.424000 audit: BPF prog-id=60 op=UNLOAD Jan 15 05:55:00.426000 audit: BPF prog-id=68 op=LOAD Jan 15 05:55:00.426000 audit: BPF prog-id=69 op=LOAD Jan 15 05:55:00.426000 audit: BPF prog-id=54 op=UNLOAD Jan 15 05:55:00.426000 audit: BPF prog-id=55 op=UNLOAD Jan 15 05:55:00.427000 audit: BPF prog-id=70 op=LOAD Jan 15 05:55:00.427000 audit: BPF prog-id=47 op=UNLOAD Jan 15 05:55:00.427000 audit: BPF prog-id=71 op=LOAD Jan 15 05:55:00.427000 audit: BPF prog-id=72 op=LOAD Jan 15 
05:55:00.427000 audit: BPF prog-id=48 op=UNLOAD Jan 15 05:55:00.427000 audit: BPF prog-id=49 op=UNLOAD Jan 15 05:55:00.429000 audit: BPF prog-id=73 op=LOAD Jan 15 05:55:00.429000 audit: BPF prog-id=44 op=UNLOAD Jan 15 05:55:00.429000 audit: BPF prog-id=74 op=LOAD Jan 15 05:55:00.429000 audit: BPF prog-id=75 op=LOAD Jan 15 05:55:00.429000 audit: BPF prog-id=45 op=UNLOAD Jan 15 05:55:00.429000 audit: BPF prog-id=46 op=UNLOAD Jan 15 05:55:00.430000 audit: BPF prog-id=76 op=LOAD Jan 15 05:55:00.430000 audit: BPF prog-id=57 op=UNLOAD Jan 15 05:55:00.431000 audit: BPF prog-id=77 op=LOAD Jan 15 05:55:00.431000 audit: BPF prog-id=51 op=UNLOAD Jan 15 05:55:00.432000 audit: BPF prog-id=78 op=LOAD Jan 15 05:55:00.432000 audit: BPF prog-id=79 op=LOAD Jan 15 05:55:00.432000 audit: BPF prog-id=52 op=UNLOAD Jan 15 05:55:00.432000 audit: BPF prog-id=53 op=UNLOAD Jan 15 05:55:00.435000 audit: BPF prog-id=80 op=LOAD Jan 15 05:55:00.483000 audit: BPF prog-id=50 op=UNLOAD Jan 15 05:55:00.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:55:00.521740 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 05:55:00.521867 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 05:55:00.522673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:55:00.522747 systemd[1]: kubelet.service: Consumed 413ms CPU time, 98.3M memory peak. Jan 15 05:55:00.528719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:55:00.966874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:55:00.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:55:00.995756 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:55:01.404599 kubelet[2485]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:55:01.404599 kubelet[2485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:55:01.409037 kubelet[2485]: I0115 05:55:01.408822 2485 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:55:02.792894 kubelet[2485]: I0115 05:55:02.791751 2485 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 15 05:55:02.792894 kubelet[2485]: I0115 05:55:02.792627 2485 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:55:02.795945 kubelet[2485]: I0115 05:55:02.793569 2485 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 15 05:55:02.795945 kubelet[2485]: I0115 05:55:02.793590 2485 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 05:55:02.795945 kubelet[2485]: I0115 05:55:02.795664 2485 server.go:956] "Client rotation is on, will bootstrap in background" Jan 15 05:55:02.858087 kubelet[2485]: E0115 05:55:02.857472 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 15 05:55:02.859412 kubelet[2485]: I0115 05:55:02.859059 2485 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:55:02.885750 kubelet[2485]: I0115 05:55:02.885490 2485 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:55:02.972783 kubelet[2485]: I0115 05:55:02.972502 2485 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 15 05:55:02.976698 kubelet[2485]: I0115 05:55:02.976422 2485 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:55:02.978652 kubelet[2485]: I0115 05:55:02.976604 2485 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:55:02.979612 kubelet[2485]: I0115 05:55:02.979127 2485 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:55:02.979612 kubelet[2485]: I0115 05:55:02.979447 2485 container_manager_linux.go:306] "Creating device plugin manager" Jan 15 05:55:02.980354 kubelet[2485]: I0115 05:55:02.980056 2485 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 15 05:55:02.990504 kubelet[2485]: I0115 05:55:02.990002 2485 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:55:02.993049 kubelet[2485]: I0115 05:55:02.992916 2485 kubelet.go:475] "Attempting to sync node with API server" Jan 15 
05:55:02.993049 kubelet[2485]: I0115 05:55:02.993019 2485 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:55:02.993788 kubelet[2485]: I0115 05:55:02.993671 2485 kubelet.go:387] "Adding apiserver pod source" Jan 15 05:55:02.994416 kubelet[2485]: I0115 05:55:02.994050 2485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:55:02.998606 kubelet[2485]: E0115 05:55:02.998398 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 15 05:55:02.998606 kubelet[2485]: E0115 05:55:02.998416 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 15 05:55:03.021123 kubelet[2485]: I0115 05:55:03.020039 2485 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:55:03.023998 kubelet[2485]: I0115 05:55:03.023858 2485 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 15 05:55:03.023998 kubelet[2485]: I0115 05:55:03.023963 2485 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 15 05:55:03.026070 kubelet[2485]: W0115 05:55:03.025688 2485 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 15 05:55:03.052892 kubelet[2485]: I0115 05:55:03.052636 2485 server.go:1262] "Started kubelet" Jan 15 05:55:03.055594 kubelet[2485]: I0115 05:55:03.054816 2485 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:55:03.056509 kubelet[2485]: I0115 05:55:03.056123 2485 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:55:03.056586 kubelet[2485]: I0115 05:55:03.056523 2485 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 15 05:55:03.057576 kubelet[2485]: I0115 05:55:03.057119 2485 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:55:03.059335 kubelet[2485]: I0115 05:55:03.059122 2485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:55:03.064656 kubelet[2485]: I0115 05:55:03.063958 2485 server.go:310] "Adding debug handlers to kubelet server" Jan 15 05:55:03.073863 kubelet[2485]: I0115 05:55:03.073753 2485 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:55:03.076905 kubelet[2485]: E0115 05:55:03.076585 2485 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:55:03.077590 kubelet[2485]: I0115 05:55:03.077469 2485 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 15 05:55:03.079062 kubelet[2485]: I0115 05:55:03.078961 2485 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 05:55:03.079914 kubelet[2485]: I0115 05:55:03.079801 2485 reconciler.go:29] "Reconciler: start to sync state" Jan 15 05:55:03.080635 kubelet[2485]: E0115 05:55:03.073073 2485 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ad1d1e12629c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 05:55:03.052085698 +0000 UTC m=+2.025079153,LastTimestamp:2026-01-15 05:55:03.052085698 +0000 UTC m=+2.025079153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 05:55:03.081772 kubelet[2485]: E0115 05:55:03.080846 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 15 05:55:03.083011 kubelet[2485]: E0115 05:55:03.082811 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Jan 15 05:55:03.088455 kubelet[2485]: I0115 05:55:03.087895 2485 factory.go:223] Registration of the systemd container factory successfully Jan 15 05:55:03.088833 kubelet[2485]: I0115 05:55:03.088572 2485 factory.go:221] Registration of 
the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:55:03.090991 kubelet[2485]: I0115 05:55:03.090825 2485 factory.go:223] Registration of the containerd container factory successfully Jan 15 05:55:03.092907 kubelet[2485]: E0115 05:55:03.092855 2485 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 05:55:03.143000 audit[2505]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.143000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdab5f6de0 a2=0 a3=0 items=0 ppid=2485 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:55:03.154000 audit[2506]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.154000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe07bb7640 a2=0 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:55:03.170000 audit[2508]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.170000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdde7e25a0 a2=0 a3=0 items=0 ppid=2485 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:55:03.177853 kubelet[2485]: E0115 05:55:03.177608 2485 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:55:03.188840 kubelet[2485]: I0115 05:55:03.187737 2485 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:55:03.188840 kubelet[2485]: I0115 05:55:03.187753 2485 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:55:03.188840 kubelet[2485]: I0115 05:55:03.187842 2485 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:55:03.189000 audit[2513]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.189000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcecbbe780 a2=0 a3=0 items=0 ppid=2485 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.189000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:55:03.247864 kubelet[2485]: I0115 05:55:03.247689 2485 policy_none.go:49] "None policy: Start" Jan 15 05:55:03.247991 kubelet[2485]: I0115 05:55:03.247980 2485 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 15 05:55:03.248016 kubelet[2485]: I0115 05:55:03.248006 2485 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 15 05:55:03.252565 kubelet[2485]: I0115 05:55:03.252107 2485 policy_none.go:47] "Start" Jan 15 05:55:03.269802 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 05:55:03.269000 audit[2516]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.269000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe5be4e410 a2=0 a3=0 items=0 ppid=2485 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 15 05:55:03.272084 kubelet[2485]: I0115 05:55:03.271816 2485 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 15 05:55:03.276000 audit[2518]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:03.276000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe04610d20 a2=0 a3=0 items=0 ppid=2485 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:55:03.278886 kubelet[2485]: I0115 05:55:03.278776 2485 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 15 05:55:03.277000 audit[2519]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.277000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff00695440 a2=0 a3=0 items=0 ppid=2485 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:55:03.279624 kubelet[2485]: I0115 05:55:03.279063 2485 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 15 05:55:03.279845 kubelet[2485]: I0115 05:55:03.279719 2485 kubelet.go:2427] "Starting kubelet main sync loop" Jan 15 05:55:03.279895 kubelet[2485]: E0115 05:55:03.279846 2485 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:55:03.283563 kubelet[2485]: E0115 05:55:03.282776 2485 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:55:03.284654 kubelet[2485]: E0115 05:55:03.284550 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 05:55:03.283000 audit[2520]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.286716 kubelet[2485]: E0115 05:55:03.285619 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jan 15 05:55:03.283000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd456c6860 a2=0 a3=0 items=0 ppid=2485 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:55:03.287000 audit[2521]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:03.287000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8acb7710 a2=0 a3=0 items=0 ppid=2485 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:55:03.292000 audit[2522]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:03.292000 audit[2522]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddf2b10e0 a2=0 a3=0 items=0 ppid=2485 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:55:03.296000 audit[2523]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:03.296000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc9be6ee0 a2=0 a3=0 items=0 ppid=2485 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:55:03.299053 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 05:55:03.303000 audit[2524]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:03.303000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe98f16850 a2=0 a3=0 items=0 ppid=2485 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:03.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:55:03.314047 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 05:55:03.322491 kubelet[2485]: E0115 05:55:03.321838 2485 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 05:55:03.323099 kubelet[2485]: I0115 05:55:03.322696 2485 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:55:03.323099 kubelet[2485]: I0115 05:55:03.322917 2485 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:55:03.330101 kubelet[2485]: E0115 05:55:03.329975 2485 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 05:55:03.330847 kubelet[2485]: I0115 05:55:03.330742 2485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:55:03.330847 kubelet[2485]: E0115 05:55:03.330838 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 05:55:03.421570 systemd[1]: Created slice kubepods-burstable-pod6070624aceef11fcaa40c8b252b8055b.slice - libcontainer container kubepods-burstable-pod6070624aceef11fcaa40c8b252b8055b.slice. 
Jan 15 05:55:03.429628 kubelet[2485]: I0115 05:55:03.429024 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:03.431141 kubelet[2485]: E0115 05:55:03.431003 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 15 05:55:03.438569 kubelet[2485]: E0115 05:55:03.438114 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:03.442937 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 15 05:55:03.464506 kubelet[2485]: E0115 05:55:03.462804 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:03.476568 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 15 05:55:03.483500 kubelet[2485]: E0115 05:55:03.483043 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:03.488070 kubelet[2485]: I0115 05:55:03.487857 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:03.488070 kubelet[2485]: I0115 05:55:03.487982 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:03.489743 kubelet[2485]: I0115 05:55:03.488419 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:03.489743 kubelet[2485]: I0115 05:55:03.489520 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:03.489868 kubelet[2485]: I0115 05:55:03.489847 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:03.490100 kubelet[2485]: I0115 05:55:03.489875 2485 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:03.491478 kubelet[2485]: I0115 05:55:03.490521 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:03.492715 kubelet[2485]: I0115 05:55:03.491876 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:03.492715 kubelet[2485]: I0115 05:55:03.492646 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:03.643548 kubelet[2485]: I0115 05:55:03.643446 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:03.647610 kubelet[2485]: E0115 05:55:03.646833 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 15 05:55:03.688745 kubelet[2485]: E0115 05:55:03.688589 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jan 15 05:55:03.749600 kubelet[2485]: E0115 05:55:03.748853 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:03.755792 containerd[1599]: time="2026-01-15T05:55:03.755684642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6070624aceef11fcaa40c8b252b8055b,Namespace:kube-system,Attempt:0,}" Jan 15 05:55:03.770690 kubelet[2485]: E0115 05:55:03.770546 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:03.772526 containerd[1599]: time="2026-01-15T05:55:03.772433693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 15 05:55:03.789946 kubelet[2485]: E0115 05:55:03.789847 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:03.793558 containerd[1599]: time="2026-01-15T05:55:03.793105671Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 15 05:55:03.981719 kubelet[2485]: E0115 05:55:03.980966 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 15 05:55:04.051995 kubelet[2485]: I0115 05:55:04.051616 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:04.051995 kubelet[2485]: E0115 05:55:04.051980 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 15 05:55:04.187605 kubelet[2485]: E0115 05:55:04.186606 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 15 05:55:04.274991 kubelet[2485]: E0115 05:55:04.274521 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 05:55:04.372037 kubelet[2485]: E0115 05:55:04.371752 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 15 05:55:04.489737 kubelet[2485]: E0115 05:55:04.489685 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jan 15 05:55:04.634097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990258883.mount: Deactivated successfully. 
Jan 15 05:55:04.647381 containerd[1599]: time="2026-01-15T05:55:04.647022104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:55:04.654533 containerd[1599]: time="2026-01-15T05:55:04.654065983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 05:55:04.660112 containerd[1599]: time="2026-01-15T05:55:04.659021269Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:55:04.662847 containerd[1599]: time="2026-01-15T05:55:04.662694881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:55:04.664769 containerd[1599]: time="2026-01-15T05:55:04.664702610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:55:04.667950 containerd[1599]: time="2026-01-15T05:55:04.667618102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 05:55:04.670667 containerd[1599]: time="2026-01-15T05:55:04.670558564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 05:55:04.674004 containerd[1599]: time="2026-01-15T05:55:04.673897900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:55:04.674907 containerd[1599]: time="2026-01-15T05:55:04.674798597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 911.241932ms" Jan 15 05:55:04.683616 containerd[1599]: time="2026-01-15T05:55:04.683000644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 905.199835ms" Jan 15 05:55:04.684008 containerd[1599]: time="2026-01-15T05:55:04.683882345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 884.21758ms" Jan 15 05:55:04.750376 containerd[1599]: time="2026-01-15T05:55:04.749439372Z" level=info msg="connecting to shim b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d" address="unix:///run/containerd/s/16b442da19a157e94239aa60fa43aef6a148cbddbe3bf3b43265bd7aa875a6c0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 
05:55:04.766543 containerd[1599]: time="2026-01-15T05:55:04.766087778Z" level=info msg="connecting to shim 52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034" address="unix:///run/containerd/s/10a811142923977a1f389573bb93ac0389a009441e78588b14d8dfba8a70ee42" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:55:04.838768 systemd[1]: Started cri-containerd-52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034.scope - libcontainer container 52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034. Jan 15 05:55:04.858791 kubelet[2485]: I0115 05:55:04.857974 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:04.859150 kubelet[2485]: E0115 05:55:04.858938 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 15 05:55:04.865695 kubelet[2485]: E0115 05:55:04.865662 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 15 05:55:04.867730 containerd[1599]: time="2026-01-15T05:55:04.867549432Z" level=info msg="connecting to shim 2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60" address="unix:///run/containerd/s/c336558d4d80a894ee70aa328985fbb462762f25c303351e4ca41ccf54914727" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:55:04.884725 systemd[1]: Started cri-containerd-b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d.scope - libcontainer container b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d. 
Jan 15 05:55:04.915410 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 15 05:55:04.915538 kernel: audit: type=1334 audit(1768456504.896:349): prog-id=81 op=LOAD Jan 15 05:55:04.896000 audit: BPF prog-id=81 op=LOAD Jan 15 05:55:04.901000 audit: BPF prog-id=82 op=LOAD Jan 15 05:55:04.925567 kernel: audit: type=1334 audit(1768456504.901:350): prog-id=82 op=LOAD Jan 15 05:55:04.925681 kernel: audit: type=1300 audit(1768456504.901:350): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.901000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.984799 kernel: audit: type=1327 audit(1768456504.901:350): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.984884 kernel: audit: type=1334 audit(1768456504.901:351): prog-id=82 op=UNLOAD Jan 15 05:55:04.901000 audit: BPF prog-id=82 op=UNLOAD Jan 15 05:55:04.993547 kernel: audit: type=1300 audit(1768456504.901:351): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.901000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:05.053637 kernel: audit: type=1327 audit(1768456504.901:351): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:05.053716 kernel: audit: type=1334 audit(1768456504.902:352): prog-id=83 op=LOAD Jan 15 05:55:04.902000 audit: BPF prog-id=83 op=LOAD Jan 15 05:55:04.902000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.095487 kernel: audit: type=1300 audit(1768456504.902:352): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.065059 systemd[1]: Started cri-containerd-2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60.scope - libcontainer container 2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60. Jan 15 05:55:04.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:05.131435 kernel: audit: type=1327 audit(1768456504.902:352): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.902000 audit: BPF prog-id=84 op=LOAD Jan 15 05:55:04.902000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.902000 audit: BPF prog-id=84 op=UNLOAD Jan 15 05:55:04.902000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.902000 audit: BPF prog-id=83 op=UNLOAD Jan 15 05:55:04.902000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.902000 audit: BPF prog-id=85 op=LOAD Jan 15 05:55:04.902000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2547 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532333939303339613866313339633138383837666136323939356237 Jan 15 05:55:04.938000 audit: BPF prog-id=86 op=LOAD Jan 15 05:55:04.939000 audit: BPF prog-id=87 op=LOAD Jan 15 05:55:04.939000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.940000 audit: BPF prog-id=87 op=UNLOAD Jan 15 05:55:04.940000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.943000 audit: BPF prog-id=88 op=LOAD Jan 15 05:55:04.943000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.946000 audit: BPF prog-id=89 op=LOAD Jan 15 05:55:04.946000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.946000 audit: BPF prog-id=89 op=UNLOAD Jan 15 05:55:04.946000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.946000 audit: BPF prog-id=88 op=UNLOAD Jan 15 05:55:04.946000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:04.946000 audit: BPF prog-id=90 op=LOAD Jan 15 05:55:04.946000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2537 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:04.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235336266666561326566396664323366303639376635306232633165 Jan 15 05:55:05.130000 audit: BPF prog-id=91 op=LOAD Jan 15 05:55:05.131000 audit: BPF prog-id=92 op=LOAD Jan 15 05:55:05.131000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=92 op=UNLOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=93 op=LOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=94 op=LOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=94 op=UNLOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=93 op=UNLOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.132000 audit: BPF prog-id=95 op=LOAD Jan 15 05:55:05.132000 audit[2615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2595 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376338326364616362383437386235313966343566633937643465 Jan 15 05:55:05.152807 containerd[1599]: time="2026-01-15T05:55:05.152755741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034\"" Jan 15 05:55:05.157984 kubelet[2485]: E0115 05:55:05.157956 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 
05:55:05.164815 containerd[1599]: time="2026-01-15T05:55:05.164415267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6070624aceef11fcaa40c8b252b8055b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d\"" Jan 15 05:55:05.166700 kubelet[2485]: E0115 05:55:05.166679 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:05.174058 containerd[1599]: time="2026-01-15T05:55:05.174016260Z" level=info msg="CreateContainer within sandbox \"52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 05:55:05.187563 containerd[1599]: time="2026-01-15T05:55:05.186799516Z" level=info msg="CreateContainer within sandbox \"b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 05:55:05.208099 containerd[1599]: time="2026-01-15T05:55:05.207733902Z" level=info msg="Container b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:55:05.230085 containerd[1599]: time="2026-01-15T05:55:05.229949048Z" level=info msg="Container ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:55:05.253536 containerd[1599]: time="2026-01-15T05:55:05.252822848Z" level=info msg="CreateContainer within sandbox \"52399039a8f139c18887fa62995b71417a0b4118ff518018ac449f6257313034\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d\"" Jan 15 05:55:05.260490 containerd[1599]: time="2026-01-15T05:55:05.259683977Z" level=info msg="StartContainer for \"b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d\"" Jan 15 05:55:05.263466 containerd[1599]: time="2026-01-15T05:55:05.262741100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60\"" Jan 15 05:55:05.265345 containerd[1599]: time="2026-01-15T05:55:05.264804895Z" level=info msg="connecting to shim b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d" address="unix:///run/containerd/s/10a811142923977a1f389573bb93ac0389a009441e78588b14d8dfba8a70ee42" protocol=ttrpc version=3 Jan 15 05:55:05.269695 kubelet[2485]: E0115 05:55:05.269610 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:05.285622 containerd[1599]: time="2026-01-15T05:55:05.285123936Z" level=info msg="CreateContainer within sandbox \"b53bffea2ef9fd23f0697f50b2c1ef06980f086fcc96bac55815ccb4a2a8cc2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3\"" Jan 15 05:55:05.286572 containerd[1599]: time="2026-01-15T05:55:05.286441530Z" level=info msg="CreateContainer within sandbox \"2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 05:55:05.288755 
containerd[1599]: time="2026-01-15T05:55:05.288522806Z" level=info msg="StartContainer for \"ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3\"" Jan 15 05:55:05.291693 containerd[1599]: time="2026-01-15T05:55:05.291513857Z" level=info msg="connecting to shim ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3" address="unix:///run/containerd/s/16b442da19a157e94239aa60fa43aef6a148cbddbe3bf3b43265bd7aa875a6c0" protocol=ttrpc version=3 Jan 15 05:55:05.335831 systemd[1]: Started cri-containerd-b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d.scope - libcontainer container b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d. Jan 15 05:55:05.366013 containerd[1599]: time="2026-01-15T05:55:05.365786413Z" level=info msg="Container f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:55:05.369776 systemd[1]: Started cri-containerd-ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3.scope - libcontainer container ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3. Jan 15 05:55:05.374000 audit: BPF prog-id=96 op=LOAD Jan 15 05:55:05.375000 audit: BPF prog-id=97 op=LOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=97 op=UNLOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=98 op=LOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=99 op=LOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=99 op=UNLOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=98 op=UNLOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.375000 audit: BPF prog-id=100 op=LOAD Jan 15 05:55:05.375000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2547 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238636136643662306433636438356139653735613331636563386538 Jan 15 05:55:05.390045 containerd[1599]: time="2026-01-15T05:55:05.389523453Z" level=info msg="CreateContainer within sandbox \"2f7c82cdacb8478b519f45fc97d4ecce86778ac52794bbf13ae72211a6216d60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194\"" Jan 15 05:55:05.396504 containerd[1599]: time="2026-01-15T05:55:05.394958066Z" level=info msg="StartContainer for \"f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194\"" Jan 15 05:55:05.401979 containerd[1599]: time="2026-01-15T05:55:05.401742772Z" level=info msg="connecting to shim f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194" address="unix:///run/containerd/s/c336558d4d80a894ee70aa328985fbb462762f25c303351e4ca41ccf54914727" protocol=ttrpc version=3 Jan 15 05:55:05.438000 audit: BPF prog-id=101 op=LOAD Jan 15 05:55:05.440000 audit: BPF prog-id=102 op=LOAD Jan 15 05:55:05.440000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.441000 audit: BPF prog-id=102 op=UNLOAD Jan 15 05:55:05.441000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.444000 audit: BPF prog-id=103 op=LOAD Jan 15 05:55:05.444000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.445000 audit: BPF prog-id=104 op=LOAD Jan 15 05:55:05.445000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.445000 audit: BPF prog-id=104 op=UNLOAD Jan 15 05:55:05.445000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.445000 audit: BPF prog-id=103 op=UNLOAD Jan 15 05:55:05.445000 audit[2670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.445000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.448000 audit: BPF prog-id=105 op=LOAD Jan 15 05:55:05.448000 audit[2670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2537 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666613338613633346231623132336130336431666536333938313063 Jan 15 05:55:05.483978 systemd[1]: Started cri-containerd-f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194.scope - libcontainer container f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194. Jan 15 05:55:05.511055 containerd[1599]: time="2026-01-15T05:55:05.510971927Z" level=info msg="StartContainer for \"b8ca6d6b0d3cd85a9e75a31cec8e82d5e1cfd410cd8dad48d969011a6f91f76d\" returns successfully" Jan 15 05:55:05.561000 audit: BPF prog-id=106 op=LOAD Jan 15 05:55:05.564000 audit: BPF prog-id=107 op=LOAD Jan 15 05:55:05.564000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.566000 audit: BPF prog-id=107 op=UNLOAD Jan 15 05:55:05.566000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.568000 audit: BPF prog-id=108 op=LOAD Jan 15 05:55:05.568000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.571000 audit: BPF prog-id=109 op=LOAD Jan 15 05:55:05.571000 
audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.571000 audit: BPF prog-id=109 op=UNLOAD Jan 15 05:55:05.571000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.571000 audit: BPF prog-id=108 op=UNLOAD Jan 15 05:55:05.571000 audit[2703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.571000 audit: BPF prog-id=110 op=LOAD Jan 15 05:55:05.571000 audit[2703]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2595 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:05.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637666134333038333234353739356165343363383330313163376636 Jan 15 05:55:05.659920 containerd[1599]: time="2026-01-15T05:55:05.658804065Z" level=info msg="StartContainer for \"ffa38a634b1b123a03d1fe639810c1354caf414dae8bf581e7d7e8fa3abe76d3\" returns successfully" Jan 15 05:55:05.763520 containerd[1599]: time="2026-01-15T05:55:05.761529988Z" level=info msg="StartContainer for \"f7fa43083245795ae43c83011c7f660b3e8157ab64a1c851ea567eaad490a194\" returns successfully" Jan 15 05:55:06.363491 kubelet[2485]: E0115 05:55:06.363021 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:06.368658 kubelet[2485]: E0115 05:55:06.368474 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:06.372869 kubelet[2485]: E0115 
05:55:06.371524 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:06.372869 kubelet[2485]: E0115 05:55:06.371840 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:06.385662 kubelet[2485]: E0115 05:55:06.385632 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:06.386835 kubelet[2485]: E0115 05:55:06.386815 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:06.470524 kubelet[2485]: I0115 05:55:06.470491 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:07.384821 kubelet[2485]: E0115 05:55:07.384758 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:07.388467 kubelet[2485]: E0115 05:55:07.387656 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:07.388467 kubelet[2485]: E0115 05:55:07.386101 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:07.388467 kubelet[2485]: E0115 05:55:07.387783 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:07.391895 kubelet[2485]: E0115 05:55:07.390078 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:07.391895 kubelet[2485]: E0115 05:55:07.390162 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:08.395452 kubelet[2485]: E0115 05:55:08.395140 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:08.398058 kubelet[2485]: E0115 05:55:08.397895 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:08.399976 kubelet[2485]: E0115 05:55:08.399866 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:08.400818 kubelet[2485]: E0115 05:55:08.400651 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:09.666599 kubelet[2485]: E0115 05:55:09.665554 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:09.666599 kubelet[2485]: E0115 05:55:09.666139 2485 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:09.838985 kubelet[2485]: E0115 05:55:09.838731 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:09.839588 kubelet[2485]: E0115 05:55:09.839077 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:11.846917 kubelet[2485]: E0115 05:55:11.840168 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:11.859559 kubelet[2485]: E0115 05:55:11.849565 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:13.347716 kubelet[2485]: E0115 05:55:13.343139 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 05:55:16.094740 kubelet[2485]: E0115 05:55:16.094492 2485 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 15 05:55:16.099847 kubelet[2485]: E0115 05:55:16.099701 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 05:55:16.243830 kubelet[2485]: E0115 05:55:16.240818 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 15 05:55:16.501016 kubelet[2485]: E0115 05:55:16.499665 2485 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 15 05:55:16.635551 kubelet[2485]: E0115 05:55:16.635042 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 15 05:55:17.083480 kubelet[2485]: E0115 05:55:17.082777 2485 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 15 05:55:19.711876 kubelet[2485]: E0115 05:55:19.710894 2485 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate 
signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 15 05:55:19.734969 kubelet[2485]: I0115 05:55:19.734164 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:19.884817 kubelet[2485]: E0115 05:55:19.882936 2485 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:55:19.884817 kubelet[2485]: E0115 05:55:19.883559 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:22.852892 kubelet[2485]: E0115 05:55:22.837222 2485 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188ad1d1e12629c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 05:55:03.052085698 +0000 UTC m=+2.025079153,LastTimestamp:2026-01-15 05:55:03.052085698 +0000 UTC m=+2.025079153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 05:55:23.329741 kubelet[2485]: I0115 05:55:23.329517 2485 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:55:23.330097 kubelet[2485]: E0115 05:55:23.329798 2485 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 15 05:55:23.574656 kubelet[2485]: E0115 05:55:23.570815 2485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 05:55:23.724673 kubelet[2485]: I0115 05:55:23.714160 2485 apiserver.go:52] "Watching apiserver" Jan 15 05:55:23.780791 kubelet[2485]: I0115 05:55:23.779141 2485 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 05:55:23.786477 kubelet[2485]: I0115 05:55:23.784065 2485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:23.845911 kubelet[2485]: I0115 05:55:23.845869 2485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:23.861798 kubelet[2485]: E0115 05:55:23.861210 2485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:23.865548 kubelet[2485]: E0115 05:55:23.865075 2485 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:23.865548 kubelet[2485]: E0115 05:55:23.865113 2485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:23.865548 kubelet[2485]: I0115 05:55:23.865141 2485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:23.870896 kubelet[2485]: E0115 05:55:23.869764 2485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:23.870896 kubelet[2485]: I0115 05:55:23.869875 2485 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:23.873928 kubelet[2485]: E0115 05:55:23.873899 2485 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:27.175087 systemd[1]: Reload requested from client PID 2772 ('systemctl') (unit session-8.scope)... Jan 15 05:55:27.175218 systemd[1]: Reloading... Jan 15 05:55:27.651996 zram_generator::config[2814]: No configuration found. Jan 15 05:55:28.265792 systemd[1]: Reloading finished in 1089 ms. Jan 15 05:55:28.365754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:55:28.389065 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 05:55:28.390015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:55:28.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:55:28.390645 systemd[1]: kubelet.service: Consumed 9.958s CPU time, 128.5M memory peak. Jan 15 05:55:28.400653 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 15 05:55:28.400733 kernel: audit: type=1131 audit(1768456528.388:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:55:28.397743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 05:55:28.397000 audit: BPF prog-id=111 op=LOAD Jan 15 05:55:28.442698 kernel: audit: type=1334 audit(1768456528.397:398): prog-id=111 op=LOAD Jan 15 05:55:28.397000 audit: BPF prog-id=61 op=UNLOAD Jan 15 05:55:28.400000 audit: BPF prog-id=112 op=LOAD Jan 15 05:55:28.475715 kernel: audit: type=1334 audit(1768456528.397:399): prog-id=61 op=UNLOAD Jan 15 05:55:28.475796 kernel: audit: type=1334 audit(1768456528.400:400): prog-id=112 op=LOAD Jan 15 05:55:28.475834 kernel: audit: type=1334 audit(1768456528.400:401): prog-id=76 op=UNLOAD Jan 15 05:55:28.400000 audit: BPF prog-id=76 op=UNLOAD Jan 15 05:55:28.484807 kernel: audit: type=1334 audit(1768456528.412:402): prog-id=113 op=LOAD Jan 15 05:55:28.412000 audit: BPF prog-id=113 op=LOAD Jan 15 05:55:28.412000 audit: BPF prog-id=65 op=UNLOAD Jan 15 05:55:28.501754 kernel: audit: type=1334 audit(1768456528.412:403): prog-id=65 op=UNLOAD Jan 15 05:55:28.501852 kernel: audit: type=1334 audit(1768456528.412:404): prog-id=114 op=LOAD Jan 15 05:55:28.412000 audit: BPF prog-id=114 op=LOAD Jan 15 05:55:28.412000 audit: BPF prog-id=115 op=LOAD Jan 15 05:55:28.527433 kernel: audit: type=1334 audit(1768456528.412:405): prog-id=115 op=LOAD Jan 15 05:55:28.527522 kernel: audit: type=1334 audit(1768456528.415:406): prog-id=66 op=UNLOAD Jan 15 05:55:28.415000 audit: BPF prog-id=66 op=UNLOAD Jan 15 05:55:28.415000 audit: BPF prog-id=67 op=UNLOAD Jan 15 05:55:28.418000 audit: BPF prog-id=116 op=LOAD Jan 15 05:55:28.418000 audit: BPF prog-id=77 op=UNLOAD Jan 15 05:55:28.419000 audit: BPF prog-id=117 op=LOAD Jan 15 05:55:28.419000 audit: BPF prog-id=118 op=LOAD Jan 15 05:55:28.419000 audit: BPF prog-id=78 op=UNLOAD Jan 15 05:55:28.419000 audit: BPF prog-id=79 op=UNLOAD Jan 15 05:55:28.420000 audit: BPF prog-id=119 op=LOAD Jan 15 05:55:28.420000 audit: BPF prog-id=73 op=UNLOAD Jan 15 05:55:28.420000 audit: BPF prog-id=120 op=LOAD Jan 15 05:55:28.421000 audit: BPF prog-id=121 op=LOAD Jan 15 05:55:28.421000 audit: BPF prog-id=74 op=UNLOAD Jan 15 05:55:28.421000 audit: BPF prog-id=75 op=UNLOAD Jan 15 05:55:28.423000 audit: BPF prog-id=122 op=LOAD Jan 15 05:55:28.423000 audit: BPF prog-id=62 op=UNLOAD Jan 15 05:55:28.423000 audit: BPF prog-id=123 op=LOAD Jan 15 05:55:28.423000 audit: BPF prog-id=124 op=LOAD Jan 15 05:55:28.423000 audit: BPF prog-id=63 op=UNLOAD Jan 15 05:55:28.424000 audit: BPF prog-id=64 op=UNLOAD Jan 15 05:55:28.428000 audit: BPF prog-id=125 op=LOAD Jan 15 05:55:28.428000 audit: BPF prog-id=70 op=UNLOAD Jan 15 05:55:28.428000 audit: BPF prog-id=126 op=LOAD Jan 15 05:55:28.428000 audit: BPF prog-id=127 op=LOAD Jan 15 05:55:28.428000 audit: BPF prog-id=71 op=UNLOAD Jan 15 05:55:28.428000 audit: BPF prog-id=72 op=UNLOAD Jan 15 05:55:28.432000 audit: BPF prog-id=128 op=LOAD Jan 15 05:55:28.432000 audit: BPF prog-id=80 op=UNLOAD Jan 15 05:55:28.433000 audit: BPF prog-id=129 op=LOAD Jan 15 05:55:28.433000 audit: BPF prog-id=130 op=LOAD Jan 15 05:55:28.433000 audit: BPF prog-id=68 op=UNLOAD Jan 15 05:55:28.433000 audit: BPF prog-id=69 op=UNLOAD Jan 15 05:55:29.171694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:55:29.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:55:29.198003 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:55:29.527135 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:55:29.527135 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:55:29.527135 kubelet[2864]: I0115 05:55:29.526804 2864 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:55:29.550213 kubelet[2864]: I0115 05:55:29.549951 2864 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 15 05:55:29.550213 kubelet[2864]: I0115 05:55:29.550080 2864 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:55:29.550213 kubelet[2864]: I0115 05:55:29.550122 2864 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 15 05:55:29.550213 kubelet[2864]: I0115 05:55:29.550132 2864 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 15 05:55:29.550849 kubelet[2864]: I0115 05:55:29.550838 2864 server.go:956] "Client rotation is on, will bootstrap in background" Jan 15 05:55:29.553637 kubelet[2864]: I0115 05:55:29.553220 2864 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 15 05:55:29.569654 kubelet[2864]: I0115 05:55:29.565070 2864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:55:29.609107 kubelet[2864]: I0115 05:55:29.606102 2864 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:55:29.662059 kubelet[2864]: I0115 05:55:29.661190 2864 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 15 05:55:29.662733 kubelet[2864]: I0115 05:55:29.662191 2864 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:55:29.663564 kubelet[2864]: I0115 05:55:29.662571 2864 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:55:29.663564 kubelet[2864]: I0115 05:55:29.662812 2864 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:55:29.663564 kubelet[2864]: I0115 05:55:29.662822 2864 container_manager_linux.go:306] "Creating device plugin manager" Jan 15 05:55:29.663564 kubelet[2864]: I0115 05:55:29.662881 2864 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 15 05:55:29.685195 kubelet[2864]: I0115 05:55:29.685026 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:55:29.690596 kubelet[2864]: I0115 05:55:29.690090 2864 kubelet.go:475] "Attempting to sync node with API server" Jan 15 05:55:29.690596 kubelet[2864]: I0115 05:55:29.690636 2864 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:55:29.693992 kubelet[2864]: I0115 05:55:29.693912 2864 kubelet.go:387] "Adding apiserver pod source" Jan 15 05:55:29.693992 kubelet[2864]: I0115 05:55:29.693943 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:55:29.778139 kubelet[2864]: I0115 05:55:29.776809 2864 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:55:29.831222 kubelet[2864]: I0115 05:55:29.828850 2864 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 15 05:55:29.831222 kubelet[2864]: I0115 05:55:29.829088 2864 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 15 05:55:29.860720 
kubelet[2864]: I0115 05:55:29.859911 2864 server.go:1262] "Started kubelet" Jan 15 05:55:29.862570 kubelet[2864]: I0115 05:55:29.861923 2864 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:55:29.862570 kubelet[2864]: I0115 05:55:29.862167 2864 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 15 05:55:29.863945 kubelet[2864]: I0115 05:55:29.862950 2864 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:55:29.863945 kubelet[2864]: I0115 05:55:29.863109 2864 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:55:29.877211 kubelet[2864]: I0115 05:55:29.877015 2864 server.go:310] "Adding debug handlers to kubelet server" Jan 15 05:55:29.879798 kubelet[2864]: I0115 05:55:29.878894 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:55:29.894632 kubelet[2864]: I0115 05:55:29.891608 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:55:29.910702 kubelet[2864]: I0115 05:55:29.909204 2864 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 15 05:55:30.018920 kubelet[2864]: I0115 05:55:30.018715 2864 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 05:55:30.031577 kubelet[2864]: I0115 05:55:30.029545 2864 reconciler.go:29] "Reconciler: start to sync state" Jan 15 05:55:30.048551 kubelet[2864]: I0115 05:55:30.047522 2864 factory.go:223] Registration of the systemd container factory successfully Jan 15 05:55:30.051595 kubelet[2864]: I0115 05:55:30.049127 2864 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:55:30.061983 kubelet[2864]: I0115 05:55:30.061107 2864 factory.go:223] Registration of the containerd container factory successfully Jan 15 05:55:30.127132 kubelet[2864]: I0115 05:55:30.125662 2864 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 15 05:55:30.162546 kubelet[2864]: I0115 05:55:30.160223 2864 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 15 05:55:30.162546 kubelet[2864]: I0115 05:55:30.161724 2864 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 15 05:55:30.162546 kubelet[2864]: I0115 05:55:30.161764 2864 kubelet.go:2427] "Starting kubelet main sync loop" Jan 15 05:55:30.162546 kubelet[2864]: E0115 05:55:30.161835 2864 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:55:30.262945 kubelet[2864]: E0115 05:55:30.262471 2864 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 05:55:30.273842 kubelet[2864]: I0115 05:55:30.273781 2864 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:55:30.273842 kubelet[2864]: I0115 05:55:30.273798 2864 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:55:30.273842 kubelet[2864]: I0115 05:55:30.273816 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:55:30.273967 kubelet[2864]: I0115 05:55:30.273943 2864 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 05:55:30.273967 kubelet[2864]: I0115 05:55:30.273952 2864 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 05:55:30.274004 kubelet[2864]: I0115 05:55:30.273970 2864 policy_none.go:49] "None policy: Start" Jan 15 05:55:30.274004 kubelet[2864]: I0115 05:55:30.273980 2864 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 15 05:55:30.274004 kubelet[2864]: I0115 05:55:30.273990 2864 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 15 05:55:30.275142 kubelet[2864]: I0115 05:55:30.274086 2864 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 15 05:55:30.275142 kubelet[2864]: I0115 05:55:30.274095 2864 policy_none.go:47] "Start" Jan 15 05:55:30.296193 kubelet[2864]: E0115 05:55:30.296023 2864 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 05:55:30.296975 kubelet[2864]: I0115 05:55:30.296736 2864 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:55:30.296975 kubelet[2864]: I0115 05:55:30.296754 2864 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:55:30.299934 kubelet[2864]: I0115 05:55:30.299206 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:55:30.320761 kubelet[2864]: E0115 05:55:30.319886 2864 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 05:55:30.468551 kubelet[2864]: I0115 05:55:30.466093 2864 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:30.468551 kubelet[2864]: I0115 05:55:30.467927 2864 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:30.469982 kubelet[2864]: I0115 05:55:30.469903 2864 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.477491 kubelet[2864]: I0115 05:55:30.477118 2864 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:55:30.514603 kubelet[2864]: I0115 05:55:30.514087 2864 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 05:55:30.514603 kubelet[2864]: I0115 05:55:30.514173 2864 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:55:30.534988 kubelet[2864]: I0115 05:55:30.534954 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:30.538770 kubelet[2864]: I0115 05:55:30.536821 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:30.538770 kubelet[2864]: I0115 05:55:30.538559 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6070624aceef11fcaa40c8b252b8055b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6070624aceef11fcaa40c8b252b8055b\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:55:30.538770 kubelet[2864]: I0115 05:55:30.538610 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.538770 kubelet[2864]: I0115 05:55:30.538637 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:55:30.538770 kubelet[2864]: I0115 05:55:30.538650 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.538903 kubelet[2864]: I0115 05:55:30.538664 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.538903 kubelet[2864]: I0115 05:55:30.538676 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.538903 kubelet[2864]: I0115 05:55:30.538689 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:55:30.698761 kubelet[2864]: I0115 05:55:30.698087 2864 apiserver.go:52] "Watching apiserver" Jan 15 05:55:30.719930 kubelet[2864]: I0115 05:55:30.719709 2864 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 05:55:30.823610 kubelet[2864]: E0115 05:55:30.823015 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:30.823610 kubelet[2864]: E0115 05:55:30.823176 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:30.825794 kubelet[2864]: E0115 05:55:30.825672 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:30.939937 kubelet[2864]: I0115 05:55:30.938966 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.938755306 podStartE2EDuration="938.755306ms" podCreationTimestamp="2026-01-15 05:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:55:30.909835819 +0000 UTC m=+1.600751877" watchObservedRunningTime="2026-01-15 05:55:30.938755306 +0000 UTC m=+1.629671374" Jan 15 05:55:30.939937 kubelet[2864]: I0115 05:55:30.939116 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.939110369 podStartE2EDuration="939.110369ms" podCreationTimestamp="2026-01-15 05:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:55:30.936856316 +0000 UTC m=+1.627772374" watchObservedRunningTime="2026-01-15 05:55:30.939110369 +0000 UTC m=+1.630026427" Jan 15 05:55:31.038540 kubelet[2864]: I0115 05:55:31.038012 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.037986304 podStartE2EDuration="1.037986304s" podCreationTimestamp="2026-01-15 05:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-15 05:55:30.974844249 +0000 UTC m=+1.665760327" watchObservedRunningTime="2026-01-15 05:55:31.037986304 +0000 UTC m=+1.728902361" Jan 15 05:55:31.248130 kubelet[2864]: E0115 05:55:31.248039 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:31.252597 kubelet[2864]: E0115 05:55:31.249223 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:31.253170 kubelet[2864]: E0115 05:55:31.253095 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:32.252497 kubelet[2864]: E0115 05:55:32.251810 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:32.253020 kubelet[2864]: E0115 05:55:32.252898 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:32.255714 kubelet[2864]: E0115 05:55:32.255630 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:32.625799 kubelet[2864]: I0115 05:55:32.625650 2864 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 05:55:32.628007 containerd[1599]: time="2026-01-15T05:55:32.627851001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 05:55:32.629083 kubelet[2864]: I0115 05:55:32.628940 2864 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 05:55:33.256165 kubelet[2864]: E0115 05:55:33.255844 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:33.267499 kubelet[2864]: E0115 05:55:33.267052 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:33.469952 systemd[1]: Created slice kubepods-besteffort-poda49adcd0_833f_4fa9_ba30_db5ed90250c8.slice - libcontainer container kubepods-besteffort-poda49adcd0_833f_4fa9_ba30_db5ed90250c8.slice. 
Jan 15 05:55:33.574470 kubelet[2864]: I0115 05:55:33.573826 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a49adcd0-833f-4fa9-ba30-db5ed90250c8-kube-proxy\") pod \"kube-proxy-bdkw5\" (UID: \"a49adcd0-833f-4fa9-ba30-db5ed90250c8\") " pod="kube-system/kube-proxy-bdkw5" Jan 15 05:55:33.574470 kubelet[2864]: I0115 05:55:33.573950 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a49adcd0-833f-4fa9-ba30-db5ed90250c8-xtables-lock\") pod \"kube-proxy-bdkw5\" (UID: \"a49adcd0-833f-4fa9-ba30-db5ed90250c8\") " pod="kube-system/kube-proxy-bdkw5" Jan 15 05:55:33.574470 kubelet[2864]: I0115 05:55:33.573969 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a49adcd0-833f-4fa9-ba30-db5ed90250c8-lib-modules\") pod \"kube-proxy-bdkw5\" (UID: \"a49adcd0-833f-4fa9-ba30-db5ed90250c8\") " pod="kube-system/kube-proxy-bdkw5" Jan 15 05:55:33.574470 kubelet[2864]: I0115 05:55:33.573984 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqh8\" (UniqueName: \"kubernetes.io/projected/a49adcd0-833f-4fa9-ba30-db5ed90250c8-kube-api-access-4zqh8\") pod \"kube-proxy-bdkw5\" (UID: \"a49adcd0-833f-4fa9-ba30-db5ed90250c8\") " pod="kube-system/kube-proxy-bdkw5" Jan 15 05:55:33.794743 kubelet[2864]: E0115 05:55:33.793720 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:33.800634 containerd[1599]: time="2026-01-15T05:55:33.799767071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdkw5,Uid:a49adcd0-833f-4fa9-ba30-db5ed90250c8,Namespace:kube-system,Attempt:0,}" Jan 15 05:55:33.799865 systemd[1]: Created slice kubepods-besteffort-podf5945de7_b6b3_4e8f_a03d_3f3b251535df.slice - libcontainer container kubepods-besteffort-podf5945de7_b6b3_4e8f_a03d_3f3b251535df.slice. 
Jan 15 05:55:33.880992 kubelet[2864]: I0115 05:55:33.880674 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgtnf\" (UniqueName: \"kubernetes.io/projected/f5945de7-b6b3-4e8f-a03d-3f3b251535df-kube-api-access-hgtnf\") pod \"tigera-operator-65cdcdfd6d-w6q5n\" (UID: \"f5945de7-b6b3-4e8f-a03d-3f3b251535df\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-w6q5n" Jan 15 05:55:33.883673 kubelet[2864]: I0115 05:55:33.883082 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5945de7-b6b3-4e8f-a03d-3f3b251535df-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-w6q5n\" (UID: \"f5945de7-b6b3-4e8f-a03d-3f3b251535df\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-w6q5n" Jan 15 05:55:33.932921 containerd[1599]: time="2026-01-15T05:55:33.932743280Z" level=info msg="connecting to shim 4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61" address="unix:///run/containerd/s/609cb1ec6d7e9726bdd3a98941654bc056f9c8a2115ef62f91a6e535bd625c9d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:55:34.093495 systemd[1]: Started cri-containerd-4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61.scope - libcontainer container 4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61. Jan 15 05:55:34.125630 containerd[1599]: time="2026-01-15T05:55:34.125179889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-w6q5n,Uid:f5945de7-b6b3-4e8f-a03d-3f3b251535df,Namespace:tigera-operator,Attempt:0,}" Jan 15 05:55:34.176693 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 05:55:34.176836 kernel: audit: type=1334 audit(1768456534.158:439): prog-id=131 op=LOAD Jan 15 05:55:34.158000 audit: BPF prog-id=131 op=LOAD Jan 15 05:55:34.185795 kernel: audit: type=1334 audit(1768456534.160:440): prog-id=132 op=LOAD Jan 15 05:55:34.160000 audit: BPF prog-id=132 op=LOAD Jan 15 05:55:34.160000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.233683 kernel: audit: type=1300 audit(1768456534.160:440): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.233768 kernel: audit: type=1327 audit(1768456534.160:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.243751 containerd[1599]: time="2026-01-15T05:55:34.243656989Z" level=info msg="connecting to shim 6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12" 
address="unix:///run/containerd/s/39dec4250dd2537ee9818ffdbc4ffa94743b5aeff9d47c601ba728be86b75860" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:55:34.274503 kubelet[2864]: E0115 05:55:34.264900 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:34.160000 audit: BPF prog-id=132 op=UNLOAD Jan 15 05:55:34.160000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.322815 kernel: audit: type=1334 audit(1768456534.160:441): prog-id=132 op=UNLOAD Jan 15 05:55:34.322883 kernel: audit: type=1300 audit(1768456534.160:441): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.166000 audit: BPF prog-id=133 op=LOAD Jan 15 05:55:34.371585 kernel: audit: type=1327 audit(1768456534.160:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.371642 kernel: audit: type=1334 audit(1768456534.166:442): prog-id=133 op=LOAD Jan 15 05:55:34.371678 kernel: audit: type=1300 audit(1768456534.166:442): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.166000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.414713 containerd[1599]: time="2026-01-15T05:55:34.383766541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdkw5,Uid:a49adcd0-833f-4fa9-ba30-db5ed90250c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61\"" Jan 15 05:55:34.414831 kubelet[2864]: E0115 05:55:34.392709 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:34.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.443221 
containerd[1599]: time="2026-01-15T05:55:34.442976280Z" level=info msg="CreateContainer within sandbox \"4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 05:55:34.468718 kernel: audit: type=1327 audit(1768456534.166:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.166000 audit: BPF prog-id=134 op=LOAD Jan 15 05:55:34.166000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.166000 audit: BPF prog-id=134 op=UNLOAD Jan 15 05:55:34.166000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.166000 audit: BPF prog-id=133 op=UNLOAD Jan 15 05:55:34.166000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.166000 audit: BPF prog-id=135 op=LOAD Jan 15 05:55:34.166000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2929 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464306164353266626234393037396264346630333464343761393864 Jan 15 05:55:34.491228 containerd[1599]: time="2026-01-15T05:55:34.491006583Z" level=info msg="Container 7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:55:34.500834 systemd[1]: Started 
cri-containerd-6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12.scope - libcontainer container 6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12. Jan 15 05:55:34.541951 containerd[1599]: time="2026-01-15T05:55:34.541794541Z" level=info msg="CreateContainer within sandbox \"4d0ad52fbb49079bd4f034d47a98dff53f68e4d8e2f01dd8a629d2761d684f61\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11\"" Jan 15 05:55:34.546636 containerd[1599]: time="2026-01-15T05:55:34.546120616Z" level=info msg="StartContainer for \"7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11\"" Jan 15 05:55:34.556020 containerd[1599]: time="2026-01-15T05:55:34.555985511Z" level=info msg="connecting to shim 7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11" address="unix:///run/containerd/s/609cb1ec6d7e9726bdd3a98941654bc056f9c8a2115ef62f91a6e535bd625c9d" protocol=ttrpc version=3 Jan 15 05:55:34.570000 audit: BPF prog-id=136 op=LOAD Jan 15 05:55:34.573000 audit: BPF prog-id=137 op=LOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.573000 audit: BPF prog-id=137 op=UNLOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.573000 audit: BPF prog-id=138 op=LOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.573000 audit: BPF prog-id=139 op=LOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.573000 audit: BPF prog-id=139 op=UNLOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.573000 audit: BPF prog-id=138 op=UNLOAD Jan 15 05:55:34.573000 audit[2982]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.574000 audit: BPF prog-id=140 op=LOAD Jan 15 05:55:34.574000 audit[2982]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2969 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635313735313164363666393863376361633465633339646639666338 Jan 15 05:55:34.627026 systemd[1]: Started cri-containerd-7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11.scope - libcontainer container 7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11. 
Jan 15 05:55:34.763548 containerd[1599]: time="2026-01-15T05:55:34.760996637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-w6q5n,Uid:f5945de7-b6b3-4e8f-a03d-3f3b251535df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12\"" Jan 15 05:55:34.778842 containerd[1599]: time="2026-01-15T05:55:34.775874778Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 05:55:34.813000 audit: BPF prog-id=141 op=LOAD Jan 15 05:55:34.813000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2929 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761303862656238396132336266383835616532313231643263636231 Jan 15 05:55:34.814000 audit: BPF prog-id=142 op=LOAD Jan 15 05:55:34.814000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2929 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761303862656238396132336266383835616532313231643263636231 Jan 15 05:55:34.814000 audit: BPF prog-id=142 op=UNLOAD Jan 15 05:55:34.814000 audit[3006]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761303862656238396132336266383835616532313231643263636231 Jan 15 05:55:34.814000 audit: BPF prog-id=141 op=UNLOAD Jan 15 05:55:34.814000 audit[3006]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761303862656238396132336266383835616532313231643263636231 Jan 15 05:55:34.814000 audit: BPF prog-id=143 op=LOAD Jan 15 05:55:34.814000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2929 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:34.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761303862656238396132336266383835616532313231643263636231 Jan 15 05:55:34.922832 containerd[1599]: time="2026-01-15T05:55:34.922796199Z" level=info msg="StartContainer for \"7a08beb89a23bf885ae2121d2ccb19b3cf8613f4b4c6d7cf93886d039299da11\" returns successfully" Jan 15 05:55:35.292228 kubelet[2864]: E0115 05:55:35.292023 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:35.293819 kubelet[2864]: E0115 05:55:35.292872 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:35.851737 kubelet[2864]: E0115 05:55:35.850816 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:35.907205 kubelet[2864]: I0115 05:55:35.906908 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdkw5" podStartSLOduration=2.906885808 podStartE2EDuration="2.906885808s" podCreationTimestamp="2026-01-15 05:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:55:35.341810786 +0000 UTC m=+6.032726844" watchObservedRunningTime="2026-01-15 05:55:35.906885808 +0000 UTC m=+6.597801866" Jan 15 05:55:36.222000 audit[3079]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:36.222000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7d1663d0 a2=0 a3=7ffc7d1663bc items=0 ppid=3019 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:55:36.224000 audit[3080]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.224000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd191a8500 a2=0 a3=7ffd191a84ec items=0 ppid=3019 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:55:36.241000 audit[3084]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:36.241000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcddc7e100 a2=0 a3=7ffcddc7e0ec items=0 ppid=3019 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:55:36.251000 audit[3085]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.251000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1262b1a0 a2=0 a3=7ffc1262b18c items=0 ppid=3019 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:55:36.260000 audit[3089]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.260000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb36f6970 a2=0 a3=7ffcb36f695c items=0 ppid=3019 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:55:36.267000 audit[3088]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:36.267000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbfa854a0 a2=0 a3=7ffdbfa8548c items=0 ppid=3019 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:55:36.298025 kubelet[2864]: E0115 05:55:36.297735 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:55:36.349000 audit[3090]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.349000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd22187740 a2=0 a3=7ffd2218772c items=0 ppid=3019 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:55:36.376000 audit[3092]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.376000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc10b04120 a2=0 a3=7ffc10b0410c items=0 ppid=3019 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 15 05:55:36.413000 audit[3095]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.413000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb4c2a680 a2=0 a3=7ffeb4c2a66c items=0 ppid=3019 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 15 05:55:36.423000 audit[3096]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.423000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe85bf99b0 a2=0 a3=7ffe85bf999c items=0 ppid=3019 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:55:36.450000 audit[3098]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.450000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb87b5130 a2=0 a3=7ffeb87b511c items=0 ppid=3019 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:55:36.461000 audit[3099]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.461000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec42679a0 a2=0 a3=7ffec426798c items=0 ppid=3019 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:55:36.482000 audit[3101]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.482000 audit[3101]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=744 a0=3 a1=7ffe59f39ef0 a2=0 a3=7ffe59f39edc items=0 ppid=3019 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:36.540000 audit[3104]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.540000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff08ea9580 a2=0 a3=7fff08ea956c items=0 ppid=3019 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:36.554000 audit[3105]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.554000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9ce7a280 a2=0 a3=7ffe9ce7a26c items=0 ppid=3019 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:55:36.579000 audit[3107]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.579000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda23c8a80 a2=0 a3=7ffda23c8a6c items=0 ppid=3019 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:55:36.590000 audit[3108]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.590000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe264a6670 a2=0 a3=7ffe264a665c items=0 ppid=3019 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 05:55:36.619000 audit[3110]: NETFILTER_CFG 
table=filter:71 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.619000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe520e4d30 a2=0 a3=7ffe520e4d1c items=0 ppid=3019 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 15 05:55:36.660000 audit[3113]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.660000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1c505660 a2=0 a3=7ffd1c50564c items=0 ppid=3019 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 15 05:55:36.695000 audit[3116]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.695000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffff2afd00 a2=0 a3=7fffff2afcec items=0 ppid=3019 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 15 05:55:36.705000 audit[3117]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.705000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb26a3af0 a2=0 a3=7ffeb26a3adc items=0 ppid=3019 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:55:36.733000 audit[3119]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.733000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffcbf2a420 a2=0 a3=7fffcbf2a40c items=0 ppid=3019 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:36.769000 audit[3122]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.769000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd20385590 a2=0 a3=7ffd2038557c items=0 ppid=3019 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:36.776000 audit[3123]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.776000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdd800e30 a2=0 a3=7fffdd800e1c items=0 ppid=3019 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:55:36.796000 audit[3125]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:55:36.796000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee341f1e0 a2=0 a3=7ffee341f1cc items=0 ppid=3019 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:55:36.964000 audit[3131]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:55:36.964000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd94c0ea60 a2=0 a3=7ffd94c0ea4c items=0 ppid=3019 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:55:36.996000 audit[3131]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:55:36.996000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd94c0ea60 a2=0 a3=7ffd94c0ea4c items=0 ppid=3019 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:36.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:55:37.010000 audit[3136]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.010000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdfd93fc80 a2=0 a3=7ffdfd93fc6c items=0 ppid=3019 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:55:37.033000 audit[3138]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.033000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff69dbc690 a2=0 a3=7fff69dbc67c items=0 ppid=3019 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 15 05:55:37.063000 audit[3141]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.063000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc49d7ed60 a2=0 a3=7ffc49d7ed4c items=0 ppid=3019 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 15 05:55:37.075000 audit[3142]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.075000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc2b9f6d0 a2=0 a3=7ffcc2b9f6bc items=0 ppid=3019 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:55:37.096000 audit[3144]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.096000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff02cdc7e0 a2=0 a3=7fff02cdc7cc 
items=0 ppid=3019 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:55:37.107000 audit[3145]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.107000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed7e82cc0 a2=0 a3=7ffed7e82cac items=0 ppid=3019 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:55:37.137000 audit[3147]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.137000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc37daf310 a2=0 a3=7ffc37daf2fc items=0 ppid=3019 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:37.171000 audit[3150]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.171000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc61e76c30 a2=0 a3=7ffc61e76c1c items=0 ppid=3019 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:37.186000 audit[3151]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.186000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd44459880 a2=0 a3=7ffd4445986c items=0 ppid=3019 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:55:37.222000 audit[3153]: NETFILTER_CFG table=filter:90 family=10 entries=1 
op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.222000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe4031e730 a2=0 a3=7ffe4031e71c items=0 ppid=3019 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:55:37.238000 audit[3154]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.238000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc8075310 a2=0 a3=7fffc80752fc items=0 ppid=3019 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 05:55:37.268000 audit[3156]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.268000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff8a0d4d0 a2=0 a3=7ffff8a0d4bc items=0 ppid=3019 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 15 05:55:37.309000 audit[3159]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.309000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0db47040 a2=0 a3=7ffe0db4702c items=0 ppid=3019 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 15 05:55:37.347000 audit[3162]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.347000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddbebe8a0 a2=0 a3=7ffddbebe88c items=0 ppid=3019 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:55:37.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 15 05:55:37.360000 audit[3163]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.360000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe76e5b3e0 a2=0 a3=7ffe76e5b3cc items=0 ppid=3019 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:55:37.384000 audit[3165]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.384000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcbd36f580 a2=0 a3=7ffcbd36f56c items=0 ppid=3019 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:37.419000 audit[3168]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.419000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffce52091c0 a2=0 a3=7ffce52091ac items=0 ppid=3019 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:55:37.429000 audit[3169]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.429000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe744a1170 a2=0 a3=7ffe744a115c items=0 ppid=3019 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:55:37.452000 audit[3171]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.452000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff172dd9c0 a2=0 a3=7fff172dd9ac items=0 ppid=3019 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:55:37.463000 audit[3172]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.463000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1bd233d0 a2=0 a3=7ffc1bd233bc items=0 ppid=3019 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:55:37.487000 audit[3174]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.487000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffa7d8c7d0 a2=0 a3=7fffa7d8c7bc items=0 ppid=3019 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:55:37.523000 audit[3177]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:55:37.523000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef5cd1e40 a2=0 a3=7ffef5cd1e2c items=0 ppid=3019 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:55:37.579000 audit[3179]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:55:37.579000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd0d2a5730 a2=0 a3=7ffd0d2a571c items=0 ppid=3019 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.579000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:55:37.580000 audit[3179]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:55:37.580000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd0d2a5730 a2=0 a3=7ffd0d2a571c items=0 ppid=3019 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:37.580000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:55:43.462132 kubelet[2864]: E0115 05:55:43.461119 2864 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.16s" Jan 15 05:55:44.537060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602704829.mount: Deactivated successfully. Jan 15 05:55:49.294736 containerd[1599]: time="2026-01-15T05:55:49.294153547Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:55:49.296901 containerd[1599]: time="2026-01-15T05:55:49.296866807Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558945" Jan 15 05:55:49.301901 containerd[1599]: time="2026-01-15T05:55:49.300965568Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:55:49.314717 containerd[1599]: time="2026-01-15T05:55:49.314665128Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:55:49.317068 containerd[1599]: time="2026-01-15T05:55:49.317024362Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 14.541106252s" Jan 15 05:55:49.317792 containerd[1599]: time="2026-01-15T05:55:49.317205790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 05:55:49.346044 containerd[1599]: time="2026-01-15T05:55:49.345998140Z" level=info msg="CreateContainer within sandbox \"6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 05:55:49.383903 containerd[1599]: time="2026-01-15T05:55:49.383685819Z" level=info msg="Container 69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:55:49.401835 containerd[1599]: time="2026-01-15T05:55:49.401791295Z" level=info msg="CreateContainer within sandbox \"6517511d66f98c7cac4ec39df9fc8864341b6c3413a53eb9fd696e575dee7c12\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a\"" Jan 15 05:55:49.407718 containerd[1599]: time="2026-01-15T05:55:49.405987513Z" level=info msg="StartContainer for \"69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a\"" Jan 15 05:55:49.415019 containerd[1599]: time="2026-01-15T05:55:49.412001387Z" level=info msg="connecting to shim 69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a" address="unix:///run/containerd/s/39dec4250dd2537ee9818ffdbc4ffa94743b5aeff9d47c601ba728be86b75860" protocol=ttrpc version=3 Jan 15 05:55:49.570175 systemd[1]: Started cri-containerd-69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a.scope - libcontainer 
container 69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a. Jan 15 05:55:49.753869 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 15 05:55:49.762835 kernel: audit: type=1334 audit(1768456549.728:511): prog-id=144 op=LOAD Jan 15 05:55:49.762882 kernel: audit: type=1334 audit(1768456549.732:512): prog-id=145 op=LOAD Jan 15 05:55:49.728000 audit: BPF prog-id=144 op=LOAD Jan 15 05:55:49.732000 audit: BPF prog-id=145 op=LOAD Jan 15 05:55:49.767438 kernel: audit: type=1300 audit(1768456549.732:512): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.732000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.817628 kernel: audit: type=1327 audit(1768456549.732:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.900838 kernel: audit: type=1334 audit(1768456549.732:513): prog-id=145 op=UNLOAD Jan 15 05:55:49.901038 kernel: audit: type=1300 audit(1768456549.732:513): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.732000 audit: BPF prog-id=145 op=UNLOAD Jan 15 05:55:49.732000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:50.024889 kernel: audit: type=1327 audit(1768456549.732:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:50.026617 kernel: audit: type=1334 audit(1768456549.735:514): prog-id=146 op=LOAD Jan 15 05:55:50.026694 kernel: audit: type=1300 audit(1768456549.735:514): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.735000 audit: BPF prog-id=146 op=LOAD Jan 15 05:55:49.735000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:50.113591 kernel: audit: type=1327 audit(1768456549.735:514): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.735000 audit: BPF prog-id=147 op=LOAD Jan 15 05:55:49.735000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.735000 audit: BPF prog-id=147 op=UNLOAD Jan 15 05:55:49.735000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.735000 audit: BPF prog-id=146 op=UNLOAD Jan 15 05:55:49.735000 audit[3190]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:55:49.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:49.735000 audit: BPF prog-id=148 op=LOAD Jan 15 05:55:49.735000 audit[3190]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2969 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 05:55:49.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639333431343836653430316639633734643635313730343434303833 Jan 15 05:55:50.142689 containerd[1599]: time="2026-01-15T05:55:50.141646298Z" level=info msg="StartContainer for \"69341486e401f9c74d651704440839edc0b9d4946a537efa61ed4ed14e56673a\" returns successfully" Jan 15 05:55:50.796703 kubelet[2864]: I0115 05:55:50.795595 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-w6q5n" podStartSLOduration=3.24433083 podStartE2EDuration="17.795210326s" podCreationTimestamp="2026-01-15 05:55:33 +0000 UTC" firstStartedPulling="2026-01-15 05:55:34.774724764 +0000 UTC m=+5.465640822" lastFinishedPulling="2026-01-15 05:55:49.32560426 +0000 UTC m=+20.016520318" observedRunningTime="2026-01-15 05:55:50.793990756 +0000 UTC m=+21.484906834" watchObservedRunningTime="2026-01-15 05:55:50.795210326 +0000 UTC m=+21.486126404" Jan 15 05:56:02.497794 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 15 05:56:02.533226 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 15 05:56:02.533993 kernel: audit: type=1106 audit(1768456562.496:519): pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:56:02.496000 audit[1817]: USER_END pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:56:02.534177 sshd[1816]: Connection closed by 10.0.0.1 port 38254 Jan 15 05:56:02.528849 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 15 05:56:02.496000 audit[1817]: CRED_DISP pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:56:02.575144 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:38254.service: Deactivated successfully. Jan 15 05:56:02.602829 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 05:56:02.603906 systemd[1]: session-8.scope: Consumed 26.831s CPU time, 221.4M memory peak. Jan 15 05:56:02.624802 kernel: audit: type=1104 audit(1768456562.496:520): pid=1817 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 05:56:02.624870 kernel: audit: type=1106 audit(1768456562.566:521): pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:56:02.566000 audit[1812]: USER_END pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:56:02.623695 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Jan 15 05:56:02.632907 systemd-logind[1584]: Removed session 8. Jan 15 05:56:02.566000 audit[1812]: CRED_DISP pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:56:02.752213 kernel: audit: type=1104 audit(1768456562.566:522): pid=1812 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:56:02.756769 kernel: audit: type=1131 audit(1768456562.574:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.115:22-10.0.0.1:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:56:02.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.115:22-10.0.0.1:38254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:56:04.420000 audit[3286]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:04.470516 kernel: audit: type=1325 audit(1768456564.420:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:04.420000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd6ec5fdb0 a2=0 a3=7ffd6ec5fd9c items=0 ppid=3019 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:04.543030 kernel: audit: type=1300 audit(1768456564.420:524): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd6ec5fdb0 a2=0 a3=7ffd6ec5fd9c items=0 ppid=3019 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:04.543173 kernel: audit: type=1327 audit(1768456564.420:524): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:04.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:04.470000 audit[3286]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:04.613156 kernel: audit: type=1325 audit(1768456564.470:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:04.613542 kernel: audit: type=1300 audit(1768456564.470:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6ec5fdb0 a2=0 a3=0 items=0 ppid=3019 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:04.470000 audit[3286]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6ec5fdb0 a2=0 a3=0 items=0 ppid=3019 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:04.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:05.684000 audit[3288]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:05.684000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd44e93f00 a2=0 a3=7ffd44e93eec items=0 ppid=3019 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:05.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:05.691000 audit[3288]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:05.691000 
audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd44e93f00 a2=0 a3=0 items=0 ppid=3019 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:05.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:17.148534 kubelet[2864]: E0115 05:56:17.148067 2864 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.578s" Jan 15 05:56:19.417886 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 05:56:19.418207 kernel: audit: type=1325 audit(1768456579.397:528): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:19.397000 audit[3292]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:19.397000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd6e029880 a2=0 a3=7ffd6e02986c items=0 ppid=3019 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:19.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:19.561490 kernel: audit: type=1300 audit(1768456579.397:528): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd6e029880 a2=0 a3=7ffd6e02986c items=0 ppid=3019 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:19.561632 kernel: audit: type=1327 audit(1768456579.397:528): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:19.562088 kernel: audit: type=1325 audit(1768456579.478:529): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:19.478000 audit[3292]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:19.655605 kernel: audit: type=1300 audit(1768456579.478:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6e029880 a2=0 a3=0 items=0 ppid=3019 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:19.478000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6e029880 a2=0 a3=0 items=0 ppid=3019 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:19.478000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:19.680536 kernel: audit: type=1327 audit(1768456579.478:529): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:20.835000 audit[3294]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:20.896412 kernel: audit: type=1325 audit(1768456580.835:530): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:20.896584 kernel: audit: type=1300 audit(1768456580.835:530): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd942027b0 a2=0 a3=7ffd9420279c items=0 ppid=3019 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:20.835000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd942027b0 a2=0 a3=7ffd9420279c items=0 ppid=3019 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:20.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:20.921051 kernel: audit: type=1327 audit(1768456580.835:530): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:20.897000 audit[3294]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:20.942381 kernel: audit: type=1325 audit(1768456580.897:531): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:20.897000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd942027b0 a2=0 a3=0 items=0 ppid=3019 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:20.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:23.370579 systemd[1]: Created slice kubepods-besteffort-podde316cb9_676f_42bf_86d8_f2d208cd5404.slice - libcontainer container kubepods-besteffort-podde316cb9_676f_42bf_86d8_f2d208cd5404.slice. 
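The proctitle= fields in the audit records above are the invoked command's argv, hex-encoded because the kernel separates the arguments with NUL bytes. Below is a minimal Go sketch (variable and constant names are illustrative, not taken from the log) that decodes the iptables-restore proctitle value repeated in these NETFILTER_CFG entries; decoded, it reads "iptables-restore -w 5 --noflush --counters".

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // PROCTITLE value copied verbatim from one of the audit records above.
        const proctitle = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // The kernel records argv as NUL-separated strings; join with spaces for display.
        argv := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(argv, " "))
        // Output: iptables-restore -w 5 --noflush --counters
    }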
Jan 15 05:56:23.433000 audit[3296]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:23.433000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce60ad050 a2=0 a3=7ffce60ad03c items=0 ppid=3019 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:23.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:23.442000 audit[3296]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:23.442000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce60ad050 a2=0 a3=0 items=0 ppid=3019 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:23.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:23.485639 kubelet[2864]: I0115 05:56:23.485489 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqzg\" (UniqueName: \"kubernetes.io/projected/de316cb9-676f-42bf-86d8-f2d208cd5404-kube-api-access-6mqzg\") pod \"calico-typha-5c9d86d6c6-hdg5c\" (UID: \"de316cb9-676f-42bf-86d8-f2d208cd5404\") " pod="calico-system/calico-typha-5c9d86d6c6-hdg5c" Jan 15 05:56:23.486699 kubelet[2864]: I0115 05:56:23.485619 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de316cb9-676f-42bf-86d8-f2d208cd5404-tigera-ca-bundle\") pod \"calico-typha-5c9d86d6c6-hdg5c\" (UID: \"de316cb9-676f-42bf-86d8-f2d208cd5404\") " pod="calico-system/calico-typha-5c9d86d6c6-hdg5c" Jan 15 05:56:23.486699 kubelet[2864]: I0115 05:56:23.485825 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de316cb9-676f-42bf-86d8-f2d208cd5404-typha-certs\") pod \"calico-typha-5c9d86d6c6-hdg5c\" (UID: \"de316cb9-676f-42bf-86d8-f2d208cd5404\") " pod="calico-system/calico-typha-5c9d86d6c6-hdg5c" Jan 15 05:56:23.582017 systemd[1]: Created slice kubepods-besteffort-pod767273fb_720d_447b_b848_3374e0b22308.slice - libcontainer container kubepods-besteffort-pod767273fb_720d_447b_b848_3374e0b22308.slice. 
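The two "Created slice" entries above and below show the naming pattern the kubelet uses for pod cgroups on this host: the pod UID has its dashes replaced with underscores and is embedded in a kubepods-besteffort-pod<uid>.slice unit name. A small Go sketch of that pattern as it appears in these entries (the helper name is illustrative, not a kubelet API):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameForPod reproduces the naming pattern visible in the log:
    // dashes in the pod UID become underscores, wrapped in a
    // kubepods-besteffort-pod<uid>.slice unit name.
    func sliceNameForPod(podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return "kubepods-besteffort-pod" + escaped + ".slice"
    }

    func main() {
        fmt.Println(sliceNameForPod("de316cb9-676f-42bf-86d8-f2d208cd5404"))
        fmt.Println(sliceNameForPod("767273fb-720d-447b-b848-3374e0b22308"))
        // kubepods-besteffort-podde316cb9_676f_42bf_86d8_f2d208cd5404.slice
        // kubepods-besteffort-pod767273fb_720d_447b_b848_3374e0b22308.slice
    }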
Jan 15 05:56:23.634045 kubelet[2864]: I0115 05:56:23.633439 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-xtables-lock\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636434 kubelet[2864]: I0115 05:56:23.636030 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/767273fb-720d-447b-b848-3374e0b22308-node-certs\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636434 kubelet[2864]: I0115 05:56:23.636062 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-policysync\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636434 kubelet[2864]: I0115 05:56:23.636081 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6z2l\" (UniqueName: \"kubernetes.io/projected/767273fb-720d-447b-b848-3374e0b22308-kube-api-access-p6z2l\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636434 kubelet[2864]: I0115 05:56:23.636101 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-lib-modules\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636434 kubelet[2864]: I0115 05:56:23.636118 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-var-lib-calico\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636610 kubelet[2864]: I0115 05:56:23.636135 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767273fb-720d-447b-b848-3374e0b22308-tigera-ca-bundle\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.636610 kubelet[2864]: I0115 05:56:23.636151 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-var-run-calico\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.639592 kubelet[2864]: I0115 05:56:23.639560 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-cni-log-dir\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.639858 kubelet[2864]: I0115 05:56:23.639670 2864 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-cni-net-dir\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.639953 kubelet[2864]: I0115 05:56:23.639939 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-flexvol-driver-host\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.640048 kubelet[2864]: I0115 05:56:23.640035 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/767273fb-720d-447b-b848-3374e0b22308-cni-bin-dir\") pod \"calico-node-jmr9t\" (UID: \"767273fb-720d-447b-b848-3374e0b22308\") " pod="calico-system/calico-node-jmr9t" Jan 15 05:56:23.764131 kubelet[2864]: E0115 05:56:23.764004 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.764131 kubelet[2864]: W0115 05:56:23.764109 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.764470 kubelet[2864]: E0115 05:56:23.764216 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.769584 kubelet[2864]: E0115 05:56:23.769178 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.770467 kubelet[2864]: W0115 05:56:23.770429 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.770467 kubelet[2864]: E0115 05:56:23.770459 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.781498 kubelet[2864]: E0115 05:56:23.781360 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.781498 kubelet[2864]: W0115 05:56:23.781442 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.781498 kubelet[2864]: E0115 05:56:23.781458 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.804549 kubelet[2864]: E0115 05:56:23.803468 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.804549 kubelet[2864]: W0115 05:56:23.803496 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.804549 kubelet[2864]: E0115 05:56:23.803523 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.870286 kubelet[2864]: E0115 05:56:23.869858 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:23.934041 kubelet[2864]: E0115 05:56:23.933483 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.934041 kubelet[2864]: W0115 05:56:23.933632 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.934041 kubelet[2864]: E0115 05:56:23.933657 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.936544 kubelet[2864]: E0115 05:56:23.936209 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.936544 kubelet[2864]: W0115 05:56:23.936465 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.936544 kubelet[2864]: E0115 05:56:23.936483 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.936831 kubelet[2864]: E0115 05:56:23.936693 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.936831 kubelet[2864]: W0115 05:56:23.936807 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.936831 kubelet[2864]: E0115 05:56:23.936820 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.939576 kubelet[2864]: E0115 05:56:23.939159 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.939576 kubelet[2864]: W0115 05:56:23.939501 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.939576 kubelet[2864]: E0115 05:56:23.939514 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.941512 kubelet[2864]: E0115 05:56:23.941189 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.941590 kubelet[2864]: W0115 05:56:23.941547 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.941590 kubelet[2864]: E0115 05:56:23.941564 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.945511 kubelet[2864]: E0115 05:56:23.945407 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.945511 kubelet[2864]: W0115 05:56:23.945506 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.945626 kubelet[2864]: E0115 05:56:23.945521 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.948886 kubelet[2864]: E0115 05:56:23.948593 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.948886 kubelet[2864]: W0115 05:56:23.948688 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.948886 kubelet[2864]: E0115 05:56:23.948791 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.949880 kubelet[2864]: E0115 05:56:23.949609 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.949880 kubelet[2864]: W0115 05:56:23.949797 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.949880 kubelet[2864]: E0115 05:56:23.949814 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.954800 kubelet[2864]: E0115 05:56:23.954205 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.954800 kubelet[2864]: W0115 05:56:23.954219 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.954800 kubelet[2864]: E0115 05:56:23.954499 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.954800 kubelet[2864]: I0115 05:56:23.954528 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94de96e0-d8e2-4380-a60f-000b8e6b1786-registration-dir\") pod \"csi-node-driver-glvpn\" (UID: \"94de96e0-d8e2-4380-a60f-000b8e6b1786\") " pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:23.959929 kubelet[2864]: E0115 05:56:23.958153 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.959929 kubelet[2864]: W0115 05:56:23.958519 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.959929 kubelet[2864]: E0115 05:56:23.958539 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.960885 kubelet[2864]: E0115 05:56:23.960199 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.960885 kubelet[2864]: W0115 05:56:23.960212 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.960885 kubelet[2864]: E0115 05:56:23.960223 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.961973 kubelet[2864]: I0115 05:56:23.961875 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94de96e0-d8e2-4380-a60f-000b8e6b1786-kubelet-dir\") pod \"csi-node-driver-glvpn\" (UID: \"94de96e0-d8e2-4380-a60f-000b8e6b1786\") " pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:23.962995 kubelet[2864]: E0115 05:56:23.962867 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.962995 kubelet[2864]: W0115 05:56:23.962979 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.963100 kubelet[2864]: E0115 05:56:23.962997 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.964548 kubelet[2864]: E0115 05:56:23.964384 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.964548 kubelet[2864]: W0115 05:56:23.964479 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.964548 kubelet[2864]: E0115 05:56:23.964491 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.966174 kubelet[2864]: E0115 05:56:23.966062 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:23.967651 containerd[1599]: time="2026-01-15T05:56:23.967189852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jmr9t,Uid:767273fb-720d-447b-b848-3374e0b22308,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:23.968224 kubelet[2864]: E0115 05:56:23.968098 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.968224 kubelet[2864]: W0115 05:56:23.968109 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.968224 kubelet[2864]: E0115 05:56:23.968120 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.969942 kubelet[2864]: E0115 05:56:23.969837 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.969942 kubelet[2864]: W0115 05:56:23.969932 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.969942 kubelet[2864]: E0115 05:56:23.969944 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.971005 kubelet[2864]: E0115 05:56:23.970853 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.971005 kubelet[2864]: W0115 05:56:23.970872 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.971005 kubelet[2864]: E0115 05:56:23.970886 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.972043 kubelet[2864]: E0115 05:56:23.972024 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.972148 kubelet[2864]: W0115 05:56:23.972129 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.972495 kubelet[2864]: E0115 05:56:23.972473 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.978866 kubelet[2864]: E0115 05:56:23.978841 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.978986 kubelet[2864]: W0115 05:56:23.978965 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.979088 kubelet[2864]: E0115 05:56:23.979069 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.983166 kubelet[2864]: E0115 05:56:23.982816 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.983166 kubelet[2864]: W0115 05:56:23.982835 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.983166 kubelet[2864]: E0115 05:56:23.982848 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.986075 kubelet[2864]: E0115 05:56:23.986057 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.986166 kubelet[2864]: W0115 05:56:23.986149 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.986550 kubelet[2864]: E0115 05:56:23.986222 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.987200 kubelet[2864]: E0115 05:56:23.987182 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.988679 kubelet[2864]: W0115 05:56:23.988660 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.988929 kubelet[2864]: E0115 05:56:23.988850 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:23.989429 kubelet[2864]: E0115 05:56:23.989409 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.990328 kubelet[2864]: W0115 05:56:23.989868 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.990328 kubelet[2864]: E0115 05:56:23.989891 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.994120 kubelet[2864]: E0115 05:56:23.994102 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.994212 kubelet[2864]: W0115 05:56:23.994196 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.994618 kubelet[2864]: E0115 05:56:23.994597 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.997418 kubelet[2864]: E0115 05:56:23.997186 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:23.997418 kubelet[2864]: W0115 05:56:23.997203 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:23.997418 kubelet[2864]: E0115 05:56:23.997218 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:23.998676 kubelet[2864]: E0115 05:56:23.998658 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.000643 kubelet[2864]: W0115 05:56:24.000414 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.000643 kubelet[2864]: E0115 05:56:24.000439 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.003425 kubelet[2864]: E0115 05:56:24.002922 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:24.005218 kubelet[2864]: E0115 05:56:24.005028 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.005218 kubelet[2864]: W0115 05:56:24.005044 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.005218 kubelet[2864]: E0115 05:56:24.005057 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.006686 containerd[1599]: time="2026-01-15T05:56:24.006567836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9d86d6c6-hdg5c,Uid:de316cb9-676f-42bf-86d8-f2d208cd5404,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:24.070461 kubelet[2864]: E0115 05:56:24.068488 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.070461 kubelet[2864]: W0115 05:56:24.068526 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.070461 kubelet[2864]: E0115 05:56:24.068557 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.070461 kubelet[2864]: I0115 05:56:24.068603 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/94de96e0-d8e2-4380-a60f-000b8e6b1786-varrun\") pod \"csi-node-driver-glvpn\" (UID: \"94de96e0-d8e2-4380-a60f-000b8e6b1786\") " pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:24.072573 kubelet[2864]: E0115 05:56:24.072546 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.074450 kubelet[2864]: W0115 05:56:24.074423 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.074564 kubelet[2864]: E0115 05:56:24.074536 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.076195 kubelet[2864]: I0115 05:56:24.076167 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94de96e0-d8e2-4380-a60f-000b8e6b1786-socket-dir\") pod \"csi-node-driver-glvpn\" (UID: \"94de96e0-d8e2-4380-a60f-000b8e6b1786\") " pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:24.079222 kubelet[2864]: E0115 05:56:24.078472 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.079222 kubelet[2864]: W0115 05:56:24.078493 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.079222 kubelet[2864]: E0115 05:56:24.078513 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.083878 kubelet[2864]: E0115 05:56:24.082690 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.083878 kubelet[2864]: W0115 05:56:24.082807 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.083878 kubelet[2864]: E0115 05:56:24.082830 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.086929 kubelet[2864]: E0115 05:56:24.086571 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.086929 kubelet[2864]: W0115 05:56:24.086587 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.086929 kubelet[2864]: E0115 05:56:24.086601 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.088500 kubelet[2864]: I0115 05:56:24.088011 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcx2\" (UniqueName: \"kubernetes.io/projected/94de96e0-d8e2-4380-a60f-000b8e6b1786-kube-api-access-4fcx2\") pod \"csi-node-driver-glvpn\" (UID: \"94de96e0-d8e2-4380-a60f-000b8e6b1786\") " pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:24.088676 kubelet[2864]: E0115 05:56:24.088657 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.088884 kubelet[2864]: W0115 05:56:24.088863 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.088973 kubelet[2864]: E0115 05:56:24.088958 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.095446 kubelet[2864]: E0115 05:56:24.094564 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.095446 kubelet[2864]: W0115 05:56:24.094776 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.095446 kubelet[2864]: E0115 05:56:24.094796 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.101482 kubelet[2864]: E0115 05:56:24.101462 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.101606 kubelet[2864]: W0115 05:56:24.101586 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.101690 kubelet[2864]: E0115 05:56:24.101676 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.105964 kubelet[2864]: E0115 05:56:24.105624 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.105964 kubelet[2864]: W0115 05:56:24.105640 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.105964 kubelet[2864]: E0115 05:56:24.105654 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.116639 kubelet[2864]: E0115 05:56:24.115043 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.116639 kubelet[2864]: W0115 05:56:24.115067 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.116639 kubelet[2864]: E0115 05:56:24.115088 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.119906 kubelet[2864]: E0115 05:56:24.118931 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.119906 kubelet[2864]: W0115 05:56:24.118945 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.119906 kubelet[2864]: E0115 05:56:24.118960 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.120519 kubelet[2864]: E0115 05:56:24.120137 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.120519 kubelet[2864]: W0115 05:56:24.120157 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.120519 kubelet[2864]: E0115 05:56:24.120174 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.123379 kubelet[2864]: E0115 05:56:24.123172 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.123379 kubelet[2864]: W0115 05:56:24.123193 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.123379 kubelet[2864]: E0115 05:56:24.123208 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.125400 kubelet[2864]: E0115 05:56:24.125091 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.125400 kubelet[2864]: W0115 05:56:24.125109 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.125400 kubelet[2864]: E0115 05:56:24.125123 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.129653 kubelet[2864]: E0115 05:56:24.129067 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.129653 kubelet[2864]: W0115 05:56:24.129087 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.129653 kubelet[2864]: E0115 05:56:24.129105 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.130890 kubelet[2864]: E0115 05:56:24.130874 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.132415 kubelet[2864]: W0115 05:56:24.132392 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.132519 kubelet[2864]: E0115 05:56:24.132502 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.136412 kubelet[2864]: E0115 05:56:24.134870 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.136412 kubelet[2864]: W0115 05:56:24.134888 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.136412 kubelet[2864]: E0115 05:56:24.134902 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.139507 kubelet[2864]: E0115 05:56:24.138944 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.139507 kubelet[2864]: W0115 05:56:24.138958 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.139507 kubelet[2864]: E0115 05:56:24.138970 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.141476 kubelet[2864]: E0115 05:56:24.141456 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.141582 kubelet[2864]: W0115 05:56:24.141564 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.142585 kubelet[2864]: E0115 05:56:24.142535 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.167835 containerd[1599]: time="2026-01-15T05:56:24.167781093Z" level=info msg="connecting to shim 4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d" address="unix:///run/containerd/s/e26c7df083bb5e1141bdaedc87ce7dd4337a3c204cd6e7728730a19d99271cbb" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:56:24.221488 kubelet[2864]: E0115 05:56:24.221195 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.221666 kubelet[2864]: W0115 05:56:24.221640 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.221862 kubelet[2864]: E0115 05:56:24.221840 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.223526 kubelet[2864]: E0115 05:56:24.223505 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.223628 kubelet[2864]: W0115 05:56:24.223610 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.226167 kubelet[2864]: E0115 05:56:24.223695 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.228057 kubelet[2864]: E0115 05:56:24.227889 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.228057 kubelet[2864]: W0115 05:56:24.227908 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.228057 kubelet[2864]: E0115 05:56:24.227929 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.234045 kubelet[2864]: E0115 05:56:24.234022 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.234202 kubelet[2864]: W0115 05:56:24.234153 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.234202 kubelet[2864]: E0115 05:56:24.234184 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.235213 kubelet[2864]: E0115 05:56:24.235155 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.235213 kubelet[2864]: W0115 05:56:24.235174 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.235213 kubelet[2864]: E0115 05:56:24.235192 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.237902 kubelet[2864]: E0115 05:56:24.237885 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.237981 kubelet[2864]: W0115 05:56:24.237967 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.238064 kubelet[2864]: E0115 05:56:24.238050 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.238980 kubelet[2864]: E0115 05:56:24.238829 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.238980 kubelet[2864]: W0115 05:56:24.238845 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.238980 kubelet[2864]: E0115 05:56:24.238859 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.239526 kubelet[2864]: E0115 05:56:24.239506 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.239612 kubelet[2864]: W0115 05:56:24.239595 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.239685 kubelet[2864]: E0115 05:56:24.239669 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.240914 kubelet[2864]: E0115 05:56:24.240653 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.240914 kubelet[2864]: W0115 05:56:24.240669 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.241348 kubelet[2864]: E0115 05:56:24.241018 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.242054 kubelet[2864]: E0115 05:56:24.241997 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.242054 kubelet[2864]: W0115 05:56:24.242016 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.242054 kubelet[2864]: E0115 05:56:24.242030 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.243158 kubelet[2864]: E0115 05:56:24.243103 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.243158 kubelet[2864]: W0115 05:56:24.243122 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.243158 kubelet[2864]: E0115 05:56:24.243138 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.248068 kubelet[2864]: E0115 05:56:24.247901 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.248561 kubelet[2864]: W0115 05:56:24.248194 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.250564 kubelet[2864]: E0115 05:56:24.249999 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.258889 kubelet[2864]: E0115 05:56:24.258568 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.258889 kubelet[2864]: W0115 05:56:24.258659 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.258889 kubelet[2864]: E0115 05:56:24.258678 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.262787 kubelet[2864]: E0115 05:56:24.262599 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.262852 kubelet[2864]: W0115 05:56:24.262806 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.262852 kubelet[2864]: E0115 05:56:24.262831 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.270556 kubelet[2864]: E0115 05:56:24.270076 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.270556 kubelet[2864]: W0115 05:56:24.270196 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.270556 kubelet[2864]: E0115 05:56:24.270222 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:56:24.329173 containerd[1599]: time="2026-01-15T05:56:24.328954734Z" level=info msg="connecting to shim 9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01" address="unix:///run/containerd/s/377cfed586bba28d827b43786332e424f9eeb3bf3684923828be17ce11d8d920" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:56:24.372361 kubelet[2864]: E0115 05:56:24.371389 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:56:24.372361 kubelet[2864]: W0115 05:56:24.371423 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:56:24.372361 kubelet[2864]: E0115 05:56:24.371450 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:56:24.477868 systemd[1]: Started cri-containerd-4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d.scope - libcontainer container 4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d. Jan 15 05:56:24.556391 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 15 05:56:24.556532 kernel: audit: type=1325 audit(1768456584.533:534): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:24.533000 audit[3447]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:24.535584 systemd[1]: Started cri-containerd-9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01.scope - libcontainer container 9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01. 
Jan 15 05:56:24.533000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb9ffd610 a2=0 a3=7ffeb9ffd5fc items=0 ppid=3019 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.625834 kernel: audit: type=1300 audit(1768456584.533:534): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb9ffd610 a2=0 a3=7ffeb9ffd5fc items=0 ppid=3019 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:24.584000 audit[3447]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:24.663907 kernel: audit: type=1327 audit(1768456584.533:534): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:24.664014 kernel: audit: type=1325 audit(1768456584.584:535): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:24.584000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb9ffd610 a2=0 a3=0 items=0 ppid=3019 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:24.730429 kernel: audit: type=1300 audit(1768456584.584:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb9ffd610 a2=0 a3=0 items=0 ppid=3019 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.730529 kernel: audit: type=1327 audit(1768456584.584:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:24.730574 kernel: audit: type=1334 audit(1768456584.669:536): prog-id=149 op=LOAD Jan 15 05:56:24.669000 audit: BPF prog-id=149 op=LOAD Jan 15 05:56:24.674000 audit: BPF prog-id=150 op=LOAD Jan 15 05:56:24.749640 kernel: audit: type=1334 audit(1768456584.674:537): prog-id=150 op=LOAD Jan 15 05:56:24.674000 audit[3412]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.829165 kernel: audit: type=1300 audit(1768456584.674:537): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 
ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.829474 kernel: audit: type=1327 audit(1768456584.674:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.674000 audit: BPF prog-id=150 op=UNLOAD Jan 15 05:56:24.674000 audit[3412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.674000 audit: BPF prog-id=151 op=LOAD Jan 15 05:56:24.674000 audit[3412]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.674000 audit: BPF prog-id=152 op=LOAD Jan 15 05:56:24.674000 audit[3412]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.675000 audit: BPF prog-id=152 op=UNLOAD Jan 15 05:56:24.675000 audit[3412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.675000 audit: BPF prog-id=151 op=UNLOAD Jan 15 05:56:24.675000 audit[3412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.675000 audit: BPF prog-id=153 op=LOAD Jan 15 05:56:24.675000 audit[3412]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3371 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463303731646434623834323030346466653766316361613135633534 Jan 15 05:56:24.690000 audit: BPF prog-id=154 op=LOAD Jan 15 05:56:24.692000 audit: BPF prog-id=155 op=LOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=155 op=UNLOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=156 op=LOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=157 op=LOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=157 op=UNLOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=156 op=UNLOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.692000 audit: BPF prog-id=158 op=LOAD Jan 15 05:56:24.692000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3396 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:24.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964306133323461303434393332383964373161366263333164626531 Jan 15 05:56:24.923130 containerd[1599]: time="2026-01-15T05:56:24.922996409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jmr9t,Uid:767273fb-720d-447b-b848-3374e0b22308,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\"" Jan 15 05:56:24.925691 containerd[1599]: time="2026-01-15T05:56:24.925620924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9d86d6c6-hdg5c,Uid:de316cb9-676f-42bf-86d8-f2d208cd5404,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01\"" Jan 15 05:56:24.927924 kubelet[2864]: E0115 05:56:24.926220 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:24.929138 kubelet[2864]: E0115 05:56:24.929115 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:24.931984 containerd[1599]: time="2026-01-15T05:56:24.931955552Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 05:56:25.163931 kubelet[2864]: E0115 05:56:25.163538 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:25.695055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551643315.mount: Deactivated successfully. Jan 15 05:56:25.967504 containerd[1599]: time="2026-01-15T05:56:25.966885133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:25.970452 containerd[1599]: time="2026-01-15T05:56:25.969621971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 15 05:56:25.973202 containerd[1599]: time="2026-01-15T05:56:25.973124969Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:25.981404 containerd[1599]: time="2026-01-15T05:56:25.981035532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:25.982416 containerd[1599]: time="2026-01-15T05:56:25.981945926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.04972581s" Jan 15 05:56:25.982416 containerd[1599]: time="2026-01-15T05:56:25.982073333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 05:56:25.990121 containerd[1599]: time="2026-01-15T05:56:25.989890330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 05:56:26.003632 containerd[1599]: time="2026-01-15T05:56:26.002665093Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 05:56:26.034860 containerd[1599]: time="2026-01-15T05:56:26.033900898Z" level=info msg="Container 073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:56:26.057565 containerd[1599]: time="2026-01-15T05:56:26.057497631Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b\"" Jan 15 05:56:26.062698 containerd[1599]: time="2026-01-15T05:56:26.062431837Z" level=info msg="StartContainer for \"073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b\"" Jan 15 05:56:26.066018 containerd[1599]: time="2026-01-15T05:56:26.065130140Z" level=info msg="connecting to shim 
073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b" address="unix:///run/containerd/s/e26c7df083bb5e1141bdaedc87ce7dd4337a3c204cd6e7728730a19d99271cbb" protocol=ttrpc version=3 Jan 15 05:56:26.201033 systemd[1]: Started cri-containerd-073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b.scope - libcontainer container 073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b. Jan 15 05:56:26.403000 audit: BPF prog-id=159 op=LOAD Jan 15 05:56:26.403000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3371 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:26.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037333934393137316139386439393633323839333733633632623936 Jan 15 05:56:26.403000 audit: BPF prog-id=160 op=LOAD Jan 15 05:56:26.403000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3371 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:26.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037333934393137316139386439393633323839333733633632623936 Jan 15 05:56:26.403000 audit: BPF prog-id=160 op=UNLOAD Jan 15 05:56:26.403000 audit[3478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:26.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037333934393137316139386439393633323839333733633632623936 Jan 15 05:56:26.404000 audit: BPF prog-id=159 op=UNLOAD Jan 15 05:56:26.404000 audit[3478]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:26.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037333934393137316139386439393633323839333733633632623936 Jan 15 05:56:26.404000 audit: BPF prog-id=161 op=LOAD Jan 15 05:56:26.404000 audit[3478]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3371 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:26.404000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037333934393137316139386439393633323839333733633632623936 Jan 15 05:56:26.551370 containerd[1599]: time="2026-01-15T05:56:26.551110531Z" level=info msg="StartContainer for \"073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b\" returns successfully" Jan 15 05:56:26.590603 systemd[1]: cri-containerd-073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b.scope: Deactivated successfully. Jan 15 05:56:26.595000 audit: BPF prog-id=161 op=UNLOAD Jan 15 05:56:26.598873 containerd[1599]: time="2026-01-15T05:56:26.598838032Z" level=info msg="received container exit event container_id:\"073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b\" id:\"073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b\" pid:3491 exited_at:{seconds:1768456586 nanos:596811293}" Jan 15 05:56:26.769195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-073949171a98d9963289373c62b96203af1ce527355d5877b0604f1650fae45b-rootfs.mount: Deactivated successfully. Jan 15 05:56:27.163511 kubelet[2864]: E0115 05:56:27.163050 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:27.571100 kubelet[2864]: E0115 05:56:27.570585 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:29.163582 kubelet[2864]: E0115 05:56:29.163498 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:29.671988 containerd[1599]: time="2026-01-15T05:56:29.671423324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:29.675497 containerd[1599]: time="2026-01-15T05:56:29.673985208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 15 05:56:29.676587 containerd[1599]: time="2026-01-15T05:56:29.676057440Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:29.683093 containerd[1599]: time="2026-01-15T05:56:29.682895066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:29.684123 containerd[1599]: time="2026-01-15T05:56:29.683961606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.69393342s" Jan 15 05:56:29.684123 containerd[1599]: time="2026-01-15T05:56:29.684001721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 05:56:29.691403 containerd[1599]: time="2026-01-15T05:56:29.690449383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 05:56:29.752069 containerd[1599]: time="2026-01-15T05:56:29.751951316Z" level=info msg="CreateContainer within sandbox \"9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 05:56:29.775223 containerd[1599]: time="2026-01-15T05:56:29.774390181Z" level=info msg="Container 26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:56:29.797550 containerd[1599]: time="2026-01-15T05:56:29.797410129Z" level=info msg="CreateContainer within sandbox \"9d0a324a04493289d71a6bc31dbe1bf641ab703a882bc15b2c0447243cce5f01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1\"" Jan 15 05:56:29.799139 containerd[1599]: time="2026-01-15T05:56:29.799086178Z" level=info msg="StartContainer for \"26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1\"" Jan 15 05:56:29.802519 containerd[1599]: time="2026-01-15T05:56:29.801679777Z" level=info msg="connecting to shim 26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1" address="unix:///run/containerd/s/377cfed586bba28d827b43786332e424f9eeb3bf3684923828be17ce11d8d920" protocol=ttrpc version=3 Jan 15 05:56:29.889933 systemd[1]: Started cri-containerd-26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1.scope - libcontainer container 26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1. 
Jan 15 05:56:29.946000 audit: BPF prog-id=162 op=LOAD Jan 15 05:56:29.969475 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 15 05:56:29.969894 kernel: audit: type=1334 audit(1768456589.946:558): prog-id=162 op=LOAD Jan 15 05:56:29.950000 audit: BPF prog-id=163 op=LOAD Jan 15 05:56:29.950000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:30.024186 kernel: audit: type=1334 audit(1768456589.950:559): prog-id=163 op=LOAD Jan 15 05:56:30.024654 kernel: audit: type=1300 audit(1768456589.950:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:30.024699 kernel: audit: type=1327 audit(1768456589.950:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.951000 audit: BPF prog-id=163 op=UNLOAD Jan 15 05:56:29.951000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:30.100725 kernel: audit: type=1334 audit(1768456589.951:560): prog-id=163 op=UNLOAD Jan 15 05:56:30.101520 kernel: audit: type=1300 audit(1768456589.951:560): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.953000 audit: BPF prog-id=164 op=LOAD Jan 15 05:56:30.146709 kernel: audit: type=1327 audit(1768456589.951:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:30.146923 kernel: audit: type=1334 audit(1768456589.953:561): prog-id=164 op=LOAD Jan 15 05:56:30.186559 kernel: audit: type=1300 audit(1768456589.953:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.953000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:30.186884 containerd[1599]: time="2026-01-15T05:56:30.161606640Z" level=info msg="StartContainer for \"26b2368dd61e52e4dc3bbf61ebf59944f5d89d336077e7a619501ee897a92df1\" returns successfully" Jan 15 05:56:29.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:30.215532 kernel: audit: type=1327 audit(1768456589.953:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.953000 audit: BPF prog-id=165 op=LOAD Jan 15 05:56:29.953000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.953000 audit: BPF prog-id=165 op=UNLOAD Jan 15 05:56:29.953000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.953000 audit: BPF prog-id=164 op=UNLOAD Jan 15 05:56:29.953000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:29.953000 audit: BPF prog-id=166 op=LOAD Jan 15 05:56:29.953000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3396 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:29.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623233363864643631653532653464633362626636316562663539 Jan 15 05:56:30.643727 kubelet[2864]: E0115 05:56:30.630215 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:31.163677 kubelet[2864]: E0115 05:56:31.163015 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:31.638578 kubelet[2864]: E0115 05:56:31.637757 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:31.679697 kubelet[2864]: I0115 05:56:31.677097 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c9d86d6c6-hdg5c" podStartSLOduration=3.922247308 podStartE2EDuration="8.677077459s" podCreationTimestamp="2026-01-15 05:56:23 +0000 UTC" firstStartedPulling="2026-01-15 05:56:24.931679727 +0000 UTC m=+55.622595785" lastFinishedPulling="2026-01-15 05:56:29.686509878 +0000 UTC m=+60.377425936" observedRunningTime="2026-01-15 05:56:30.731541951 +0000 UTC m=+61.422458029" watchObservedRunningTime="2026-01-15 05:56:31.677077459 +0000 UTC m=+62.367993517" Jan 15 05:56:31.764000 audit[3588]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:31.764000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdcb62e3e0 a2=0 a3=7ffdcb62e3cc items=0 ppid=3019 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:31.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:31.770000 audit[3588]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:56:31.770000 audit[3588]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdcb62e3e0 a2=0 a3=7ffdcb62e3cc items=0 ppid=3019 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:31.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:56:32.661396 kubelet[2864]: E0115 05:56:32.661035 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:33.164442 kubelet[2864]: E0115 
05:56:33.163474 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:35.164448 kubelet[2864]: E0115 05:56:35.163667 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:36.164613 kubelet[2864]: E0115 05:56:36.164566 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:37.163131 kubelet[2864]: E0115 05:56:37.163070 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:39.164066 kubelet[2864]: E0115 05:56:39.163714 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:39.459734 containerd[1599]: time="2026-01-15T05:56:39.458735752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:39.468166 containerd[1599]: time="2026-01-15T05:56:39.466975986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 05:56:39.475407 containerd[1599]: time="2026-01-15T05:56:39.475203447Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:39.496514 containerd[1599]: time="2026-01-15T05:56:39.496456610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:56:39.500408 containerd[1599]: time="2026-01-15T05:56:39.497748050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 9.807260145s" Jan 15 05:56:39.500599 containerd[1599]: time="2026-01-15T05:56:39.500568546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 05:56:39.559546 containerd[1599]: time="2026-01-15T05:56:39.559485615Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 05:56:39.648194 containerd[1599]: time="2026-01-15T05:56:39.647983588Z" level=info msg="Container 7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:56:39.695767 containerd[1599]: time="2026-01-15T05:56:39.695116989Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75\"" Jan 15 05:56:39.698183 containerd[1599]: time="2026-01-15T05:56:39.698148464Z" level=info msg="StartContainer for \"7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75\"" Jan 15 05:56:39.705660 containerd[1599]: time="2026-01-15T05:56:39.705564611Z" level=info msg="connecting to shim 7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75" address="unix:///run/containerd/s/e26c7df083bb5e1141bdaedc87ce7dd4337a3c204cd6e7728730a19d99271cbb" protocol=ttrpc version=3 Jan 15 05:56:39.906090 systemd[1]: Started cri-containerd-7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75.scope - libcontainer container 7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75. Jan 15 05:56:40.223000 audit: BPF prog-id=167 op=LOAD Jan 15 05:56:40.244122 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 15 05:56:40.244427 kernel: audit: type=1334 audit(1768456600.223:568): prog-id=167 op=LOAD Jan 15 05:56:40.223000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.285950 kernel: audit: type=1300 audit(1768456600.223:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.286504 kernel: audit: type=1327 audit(1768456600.223:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.225000 audit: BPF prog-id=168 op=LOAD Jan 15 05:56:40.225000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.394612 kernel: audit: type=1334 audit(1768456600.225:569): prog-id=168 op=LOAD Jan 15 05:56:40.395658 kernel: audit: type=1300 audit(1768456600.225:569): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.395704 kernel: audit: type=1327 audit(1768456600.225:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.450942 kernel: audit: type=1334 audit(1768456600.225:570): prog-id=168 op=UNLOAD Jan 15 05:56:40.225000 audit: BPF prog-id=168 op=UNLOAD Jan 15 05:56:40.225000 audit[3596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.512059 kernel: audit: type=1300 audit(1768456600.225:570): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.512209 kernel: audit: type=1327 audit(1768456600.225:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.576452 kernel: audit: type=1334 audit(1768456600.225:571): prog-id=167 op=UNLOAD Jan 15 05:56:40.225000 audit: BPF prog-id=167 op=UNLOAD Jan 15 05:56:40.225000 audit[3596]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.225000 audit: BPF prog-id=169 op=LOAD Jan 15 05:56:40.225000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3371 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:56:40.225000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663037396462343263356632653938333731616534333161633334 Jan 15 05:56:40.651731 containerd[1599]: time="2026-01-15T05:56:40.651039937Z" level=info msg="StartContainer for \"7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75\" returns successfully" Jan 15 05:56:40.930668 kubelet[2864]: E0115 05:56:40.917193 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:41.166624 kubelet[2864]: E0115 05:56:41.162523 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:41.938753 kubelet[2864]: E0115 05:56:41.938696 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:43.166494 kubelet[2864]: E0115 05:56:43.164433 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:43.166494 kubelet[2864]: E0115 05:56:43.164630 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:45.163478 kubelet[2864]: E0115 05:56:45.162993 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:45.305201 systemd[1]: cri-containerd-7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75.scope: Deactivated successfully. Jan 15 05:56:45.306211 systemd[1]: cri-containerd-7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75.scope: Consumed 3.951s CPU time, 179.2M memory peak, 3.4M read from disk, 171.3M written to disk. 
Jan 15 05:56:45.322000 audit: BPF prog-id=169 op=UNLOAD Jan 15 05:56:45.331519 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 15 05:56:45.331653 kernel: audit: type=1334 audit(1768456605.322:573): prog-id=169 op=UNLOAD Jan 15 05:56:45.343069 containerd[1599]: time="2026-01-15T05:56:45.340846070Z" level=info msg="received container exit event container_id:\"7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75\" id:\"7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75\" pid:3608 exited_at:{seconds:1768456605 nanos:337786525}" Jan 15 05:56:45.488742 kubelet[2864]: I0115 05:56:45.487777 2864 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 15 05:56:45.550783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cf079db42c5f2e98371ae431ac34769f435a0ffe5d38322a0bdc69bbcd92a75-rootfs.mount: Deactivated successfully. Jan 15 05:56:45.714819 systemd[1]: Created slice kubepods-besteffort-pod9a88b368_bf02_4b48_90b9_c88e30070c80.slice - libcontainer container kubepods-besteffort-pod9a88b368_bf02_4b48_90b9_c88e30070c80.slice. Jan 15 05:56:45.744431 kubelet[2864]: I0115 05:56:45.742728 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-ca-bundle\") pod \"whisker-7f6b66c6f5-g5499\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:56:45.744431 kubelet[2864]: I0115 05:56:45.742859 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkwk\" (UniqueName: \"kubernetes.io/projected/28dbae26-ae3c-40cb-b52b-26db1f4b6ea2-kube-api-access-xmkwk\") pod \"coredns-66bc5c9577-9hdpt\" (UID: \"28dbae26-ae3c-40cb-b52b-26db1f4b6ea2\") " pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:45.744431 kubelet[2864]: I0115 05:56:45.742966 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-backend-key-pair\") pod \"whisker-7f6b66c6f5-g5499\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:56:45.744431 kubelet[2864]: I0115 05:56:45.742995 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28dbae26-ae3c-40cb-b52b-26db1f4b6ea2-config-volume\") pod \"coredns-66bc5c9577-9hdpt\" (UID: \"28dbae26-ae3c-40cb-b52b-26db1f4b6ea2\") " pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:45.744431 kubelet[2864]: I0115 05:56:45.743094 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rngf\" (UniqueName: \"kubernetes.io/projected/9a88b368-bf02-4b48-90b9-c88e30070c80-kube-api-access-2rngf\") pod \"whisker-7f6b66c6f5-g5499\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:56:45.755607 systemd[1]: Created slice kubepods-burstable-pod28dbae26_ae3c_40cb_b52b_26db1f4b6ea2.slice - libcontainer container kubepods-burstable-pod28dbae26_ae3c_40cb_b52b_26db1f4b6ea2.slice. 
Jan 15 05:56:45.796555 systemd[1]: Created slice kubepods-burstable-podf1681ef8_d92a_4410_95a3_be947ed6bc57.slice - libcontainer container kubepods-burstable-podf1681ef8_d92a_4410_95a3_be947ed6bc57.slice. Jan 15 05:56:45.831410 systemd[1]: Created slice kubepods-besteffort-pod9f5c6d0a_fde4_4893_b36a_da65165e8843.slice - libcontainer container kubepods-besteffort-pod9f5c6d0a_fde4_4893_b36a_da65165e8843.slice. Jan 15 05:56:45.847513 systemd[1]: Created slice kubepods-besteffort-pod759e03fd_9efa_4510_b2ed_62c16a4c2e13.slice - libcontainer container kubepods-besteffort-pod759e03fd_9efa_4510_b2ed_62c16a4c2e13.slice. Jan 15 05:56:45.849094 kubelet[2864]: I0115 05:56:45.848744 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgrf\" (UniqueName: \"kubernetes.io/projected/f1681ef8-d92a-4410-95a3-be947ed6bc57-kube-api-access-ctgrf\") pod \"coredns-66bc5c9577-lm2t2\" (UID: \"f1681ef8-d92a-4410-95a3-be947ed6bc57\") " pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:45.849094 kubelet[2864]: I0115 05:56:45.849051 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125448ce-e54b-4cc3-923a-6bb87264173b-config\") pod \"goldmane-7c778bb748-nxntl\" (UID: \"125448ce-e54b-4cc3-923a-6bb87264173b\") " pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:45.849526 kubelet[2864]: I0115 05:56:45.849495 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvwsv\" (UniqueName: \"kubernetes.io/projected/9f5c6d0a-fde4-4893-b36a-da65165e8843-kube-api-access-nvwsv\") pod \"calico-kube-controllers-98d64bddf-vgrjr\" (UID: \"9f5c6d0a-fde4-4893-b36a-da65165e8843\") " pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:56:45.849587 kubelet[2864]: I0115 05:56:45.849529 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1681ef8-d92a-4410-95a3-be947ed6bc57-config-volume\") pod \"coredns-66bc5c9577-lm2t2\" (UID: \"f1681ef8-d92a-4410-95a3-be947ed6bc57\") " pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:45.849587 kubelet[2864]: I0115 05:56:45.849556 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585lm\" (UniqueName: \"kubernetes.io/projected/125448ce-e54b-4cc3-923a-6bb87264173b-kube-api-access-585lm\") pod \"goldmane-7c778bb748-nxntl\" (UID: \"125448ce-e54b-4cc3-923a-6bb87264173b\") " pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:45.849587 kubelet[2864]: I0115 05:56:45.849579 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/759e03fd-9efa-4510-b2ed-62c16a4c2e13-calico-apiserver-certs\") pod \"calico-apiserver-ffcfc74f7-h9kdb\" (UID: \"759e03fd-9efa-4510-b2ed-62c16a4c2e13\") " pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:56:45.849716 kubelet[2864]: I0115 05:56:45.849619 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125448ce-e54b-4cc3-923a-6bb87264173b-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-nxntl\" (UID: \"125448ce-e54b-4cc3-923a-6bb87264173b\") " pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:45.854597 
kubelet[2864]: I0115 05:56:45.854401 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f5c6d0a-fde4-4893-b36a-da65165e8843-tigera-ca-bundle\") pod \"calico-kube-controllers-98d64bddf-vgrjr\" (UID: \"9f5c6d0a-fde4-4893-b36a-da65165e8843\") " pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:56:45.854597 kubelet[2864]: I0115 05:56:45.854528 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9ae406d-9e12-445c-a7c0-69e8063e9379-calico-apiserver-certs\") pod \"calico-apiserver-ffcfc74f7-b2c68\" (UID: \"b9ae406d-9e12-445c-a7c0-69e8063e9379\") " pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:56:45.854709 kubelet[2864]: I0115 05:56:45.854665 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/125448ce-e54b-4cc3-923a-6bb87264173b-goldmane-key-pair\") pod \"goldmane-7c778bb748-nxntl\" (UID: \"125448ce-e54b-4cc3-923a-6bb87264173b\") " pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:45.854709 kubelet[2864]: I0115 05:56:45.854694 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrtp\" (UniqueName: \"kubernetes.io/projected/759e03fd-9efa-4510-b2ed-62c16a4c2e13-kube-api-access-9jrtp\") pod \"calico-apiserver-ffcfc74f7-h9kdb\" (UID: \"759e03fd-9efa-4510-b2ed-62c16a4c2e13\") " pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:56:45.856856 kubelet[2864]: I0115 05:56:45.854827 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tz8t\" (UniqueName: \"kubernetes.io/projected/b9ae406d-9e12-445c-a7c0-69e8063e9379-kube-api-access-7tz8t\") pod \"calico-apiserver-ffcfc74f7-b2c68\" (UID: \"b9ae406d-9e12-445c-a7c0-69e8063e9379\") " pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:56:45.865066 systemd[1]: Created slice kubepods-besteffort-pod125448ce_e54b_4cc3_923a_6bb87264173b.slice - libcontainer container kubepods-besteffort-pod125448ce_e54b_4cc3_923a_6bb87264173b.slice. Jan 15 05:56:45.882175 systemd[1]: Created slice kubepods-besteffort-podb9ae406d_9e12_445c_a7c0_69e8063e9379.slice - libcontainer container kubepods-besteffort-podb9ae406d_9e12_445c_a7c0_69e8063e9379.slice. 
Jan 15 05:56:46.041568 kubelet[2864]: E0115 05:56:46.040614 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:46.051128 containerd[1599]: time="2026-01-15T05:56:46.050565266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 05:56:46.057206 containerd[1599]: time="2026-01-15T05:56:46.056685378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6b66c6f5-g5499,Uid:9a88b368-bf02-4b48-90b9-c88e30070c80,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:46.095736 kubelet[2864]: E0115 05:56:46.095644 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:46.106609 containerd[1599]: time="2026-01-15T05:56:46.106558139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,}" Jan 15 05:56:46.121550 kubelet[2864]: E0115 05:56:46.121069 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:46.131737 containerd[1599]: time="2026-01-15T05:56:46.126008896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,}" Jan 15 05:56:46.185671 kubelet[2864]: E0115 05:56:46.185127 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:46.221479 containerd[1599]: time="2026-01-15T05:56:46.218989748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:46.221479 containerd[1599]: time="2026-01-15T05:56:46.219135119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:56:46.225030 containerd[1599]: time="2026-01-15T05:56:46.224568751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:46.231193 containerd[1599]: time="2026-01-15T05:56:46.231073047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:56:46.906467 containerd[1599]: time="2026-01-15T05:56:46.904789756Z" level=error msg="Failed to destroy network for sandbox \"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.919021 systemd[1]: run-netns-cni\x2d847f4bb8\x2d3d14\x2db99a\x2d0366\x2d468626da8c0e.mount: Deactivated successfully. 
Jan 15 05:56:46.955069 containerd[1599]: time="2026-01-15T05:56:46.954759473Z" level=error msg="Failed to destroy network for sandbox \"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.958844 systemd[1]: run-netns-cni\x2d4fb0d826\x2d21d9\x2dc1bc\x2d2234\x2dbd97974ba3d0.mount: Deactivated successfully. Jan 15 05:56:46.978720 containerd[1599]: time="2026-01-15T05:56:46.978511469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.981106 kubelet[2864]: E0115 05:56:46.980636 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.981106 kubelet[2864]: E0115 05:56:46.980794 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:56:46.981106 kubelet[2864]: E0115 05:56:46.980823 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:56:46.981482 kubelet[2864]: E0115 05:56:46.980878 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"157feeb19e21ac1531d972d0d6b8c5be7df1236461b5c8019fa6b122869ad70d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:56:46.986677 containerd[1599]: time="2026-01-15T05:56:46.986462393Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7f6b66c6f5-g5499,Uid:9a88b368-bf02-4b48-90b9-c88e30070c80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.987781 kubelet[2864]: E0115 05:56:46.987736 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:46.987864 kubelet[2864]: E0115 05:56:46.987803 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:56:46.992069 kubelet[2864]: E0115 05:56:46.991428 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:56:46.992069 kubelet[2864]: E0115 05:56:46.992015 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f6b66c6f5-g5499_calico-system(9a88b368-bf02-4b48-90b9-c88e30070c80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f6b66c6f5-g5499_calico-system(9a88b368-bf02-4b48-90b9-c88e30070c80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9a5b6b4cceab02e0b9a0d4383810766c8e535203ddc7e9206128e22f57b4b48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f6b66c6f5-g5499" podUID="9a88b368-bf02-4b48-90b9-c88e30070c80" Jan 15 05:56:47.011800 containerd[1599]: time="2026-01-15T05:56:47.010804481Z" level=error msg="Failed to destroy network for sandbox \"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.025859 systemd[1]: run-netns-cni\x2d13e639d6\x2d7c57\x2decdf\x2d50cb\x2d1f63eb4395ed.mount: Deactivated successfully. 
Jan 15 05:56:47.039042 containerd[1599]: time="2026-01-15T05:56:47.038987809Z" level=error msg="Failed to destroy network for sandbox \"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.046134 containerd[1599]: time="2026-01-15T05:56:47.045671556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.049722 systemd[1]: run-netns-cni\x2dc7bbcf84\x2df187\x2ddcf1\x2d3079\x2dfc45224a37b8.mount: Deactivated successfully. Jan 15 05:56:47.052754 kubelet[2864]: E0115 05:56:47.051674 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.052754 kubelet[2864]: E0115 05:56:47.051735 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:47.052754 kubelet[2864]: E0115 05:56:47.051762 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:47.053006 kubelet[2864]: E0115 05:56:47.051824 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9hdpt_kube-system(28dbae26-ae3c-40cb-b52b-26db1f4b6ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9hdpt_kube-system(28dbae26-ae3c-40cb-b52b-26db1f4b6ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"603431de7ea54ee80d34cf17a6620f896f2a41ea41adf04907beb985086ad59f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9hdpt" podUID="28dbae26-ae3c-40cb-b52b-26db1f4b6ea2" Jan 15 05:56:47.056790 containerd[1599]: time="2026-01-15T05:56:47.056106674Z" level=error msg="Failed to destroy network for sandbox \"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.060130 containerd[1599]: time="2026-01-15T05:56:47.058631200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.062132 kubelet[2864]: E0115 05:56:47.061545 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.062132 kubelet[2864]: E0115 05:56:47.061592 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:56:47.062132 kubelet[2864]: E0115 05:56:47.061715 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:56:47.062472 kubelet[2864]: E0115 05:56:47.061771 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23f9d97ce4322532a660b0f3cfa3b9f68158f218f41fe82138ff3dd81497c649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:56:47.077052 containerd[1599]: time="2026-01-15T05:56:47.076989011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.082858 kubelet[2864]: E0115 05:56:47.082675 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.082858 kubelet[2864]: E0115 05:56:47.082730 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:47.082858 kubelet[2864]: E0115 05:56:47.082752 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:47.083575 kubelet[2864]: E0115 05:56:47.082802 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d7f61aab9473ccebe2ce7754857e6fdd716b74987e6ee38ef1c266ac19bfc81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:56:47.098574 containerd[1599]: time="2026-01-15T05:56:47.098533698Z" level=error msg="Failed to destroy network for sandbox \"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.122762 containerd[1599]: time="2026-01-15T05:56:47.122702559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.125784 kubelet[2864]: E0115 05:56:47.123692 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.125784 kubelet[2864]: E0115 05:56:47.123761 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:56:47.125784 kubelet[2864]: E0115 05:56:47.123786 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:56:47.126206 kubelet[2864]: E0115 05:56:47.123848 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1f4ad6b1e5936d85d42df306811eaf1df135c06b4082665aaac102e157344c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:56:47.127097 containerd[1599]: time="2026-01-15T05:56:47.126869647Z" level=error msg="Failed to destroy network for sandbox \"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.138001 containerd[1599]: time="2026-01-15T05:56:47.137590390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.139461 kubelet[2864]: E0115 05:56:47.137871 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.139461 kubelet[2864]: E0115 
05:56:47.138027 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:47.139461 kubelet[2864]: E0115 05:56:47.138054 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:47.139587 kubelet[2864]: E0115 05:56:47.138114 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lm2t2_kube-system(f1681ef8-d92a-4410-95a3-be947ed6bc57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lm2t2_kube-system(f1681ef8-d92a-4410-95a3-be947ed6bc57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02ce2c90a12b2cd04b1ec3f5c3815ac776adebfa8e1aabad0e798a956cc28eed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lm2t2" podUID="f1681ef8-d92a-4410-95a3-be947ed6bc57" Jan 15 05:56:47.183564 systemd[1]: Created slice kubepods-besteffort-pod94de96e0_d8e2_4380_a60f_000b8e6b1786.slice - libcontainer container kubepods-besteffort-pod94de96e0_d8e2_4380_a60f_000b8e6b1786.slice. 
Jan 15 05:56:47.197991 containerd[1599]: time="2026-01-15T05:56:47.197732776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:47.532673 containerd[1599]: time="2026-01-15T05:56:47.531042424Z" level=error msg="Failed to destroy network for sandbox \"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.542522 containerd[1599]: time="2026-01-15T05:56:47.541722684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.543691 kubelet[2864]: E0115 05:56:47.543535 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:47.544535 kubelet[2864]: E0115 05:56:47.543684 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:47.544535 kubelet[2864]: E0115 05:56:47.543720 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glvpn" Jan 15 05:56:47.544535 kubelet[2864]: E0115 05:56:47.543801 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb65ccce097ee58fd520b0b91f346e0d0b65b82437378bf4ebe5a53bd634a50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:56:47.550447 systemd[1]: run-netns-cni\x2d4e56d0ef\x2d0067\x2d956e\x2d1b5a\x2d7ccc94e49973.mount: Deactivated successfully. 
Jan 15 05:56:47.550707 systemd[1]: run-netns-cni\x2d2bfeb2ea\x2d5951\x2d2e31\x2dbba3\x2d73fb59804445.mount: Deactivated successfully. Jan 15 05:56:47.550827 systemd[1]: run-netns-cni\x2d279788b7\x2d1831\x2d70b9\x2df6f2\x2de73d87de2270.mount: Deactivated successfully. Jan 15 05:56:52.164527 kubelet[2864]: E0115 05:56:52.164411 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:58.231220 kubelet[2864]: E0115 05:56:58.230901 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:58.247861 containerd[1599]: time="2026-01-15T05:56:58.237383856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,}" Jan 15 05:56:58.283215 kubelet[2864]: E0115 05:56:58.280820 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:56:58.286440 containerd[1599]: time="2026-01-15T05:56:58.285905643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,}" Jan 15 05:56:58.669451 containerd[1599]: time="2026-01-15T05:56:58.668819224Z" level=error msg="Failed to destroy network for sandbox \"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.679660 systemd[1]: run-netns-cni\x2da6db4fbf\x2d33a8\x2dbbed\x2da925\x2d4ec0310947d7.mount: Deactivated successfully. 
Jan 15 05:56:58.684875 containerd[1599]: time="2026-01-15T05:56:58.680960968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.685445 kubelet[2864]: E0115 05:56:58.682904 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.685445 kubelet[2864]: E0115 05:56:58.682977 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:58.685445 kubelet[2864]: E0115 05:56:58.683116 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hdpt" Jan 15 05:56:58.685604 kubelet[2864]: E0115 05:56:58.683191 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9hdpt_kube-system(28dbae26-ae3c-40cb-b52b-26db1f4b6ea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9hdpt_kube-system(28dbae26-ae3c-40cb-b52b-26db1f4b6ea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3698ab1d96a481501b474aec0fb3618c7c9b944f4ebe144181accca0f10844\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9hdpt" podUID="28dbae26-ae3c-40cb-b52b-26db1f4b6ea2" Jan 15 05:56:58.699739 containerd[1599]: time="2026-01-15T05:56:58.699564305Z" level=error msg="Failed to destroy network for sandbox \"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.710452 systemd[1]: run-netns-cni\x2da92abdf8\x2dcace\x2d960a\x2d490d\x2da46c10fef8e9.mount: Deactivated successfully. 
Jan 15 05:56:58.713611 kubelet[2864]: E0115 05:56:58.712116 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.713611 kubelet[2864]: E0115 05:56:58.712206 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:58.713611 kubelet[2864]: E0115 05:56:58.712501 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lm2t2" Jan 15 05:56:58.713790 containerd[1599]: time="2026-01-15T05:56:58.711767409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:58.714158 kubelet[2864]: E0115 05:56:58.713680 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lm2t2_kube-system(f1681ef8-d92a-4410-95a3-be947ed6bc57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lm2t2_kube-system(f1681ef8-d92a-4410-95a3-be947ed6bc57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a761689067733d47e837d1dda34e1b9601d75b30a0834dd982a7f4600c48c757\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lm2t2" podUID="f1681ef8-d92a-4410-95a3-be947ed6bc57" Jan 15 05:56:59.170152 containerd[1599]: time="2026-01-15T05:56:59.169933370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,}" Jan 15 05:56:59.394688 containerd[1599]: time="2026-01-15T05:56:59.394633706Z" level=error msg="Failed to destroy network for sandbox \"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:59.401659 containerd[1599]: time="2026-01-15T05:56:59.401132511Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:59.401420 systemd[1]: run-netns-cni\x2ddc7029fd\x2d6d3c\x2dc4a2\x2d5f12\x2dc1cc03e93d69.mount: Deactivated successfully. Jan 15 05:56:59.403768 kubelet[2864]: E0115 05:56:59.403489 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:56:59.406522 kubelet[2864]: E0115 05:56:59.405677 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:59.406522 kubelet[2864]: E0115 05:56:59.405804 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-nxntl" Jan 15 05:56:59.406522 kubelet[2864]: E0115 05:56:59.405856 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab9c1711e571e98cff39d9dfb24b55cab6d545f198fcffdd2e404882829b0f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:01.172615 containerd[1599]: time="2026-01-15T05:57:01.171727009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6b66c6f5-g5499,Uid:9a88b368-bf02-4b48-90b9-c88e30070c80,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:01.441206 containerd[1599]: time="2026-01-15T05:57:01.440222793Z" level=error msg="Failed to destroy network for sandbox \"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:01.449870 systemd[1]: 
run-netns-cni\x2d3dfc4ef5\x2d6aba\x2ddf56\x2dd263\x2df9d314beb546.mount: Deactivated successfully. Jan 15 05:57:01.521595 containerd[1599]: time="2026-01-15T05:57:01.521475047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f6b66c6f5-g5499,Uid:9a88b368-bf02-4b48-90b9-c88e30070c80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:01.522810 kubelet[2864]: E0115 05:57:01.522179 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:01.522810 kubelet[2864]: E0115 05:57:01.522484 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:57:01.522810 kubelet[2864]: E0115 05:57:01.522509 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f6b66c6f5-g5499" Jan 15 05:57:01.523977 kubelet[2864]: E0115 05:57:01.522564 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f6b66c6f5-g5499_calico-system(9a88b368-bf02-4b48-90b9-c88e30070c80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f6b66c6f5-g5499_calico-system(9a88b368-bf02-4b48-90b9-c88e30070c80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f41739805dc1f553b8651ecbb2ac2591e8c0771827ecd6b6924ba262775e596\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f6b66c6f5-g5499" podUID="9a88b368-bf02-4b48-90b9-c88e30070c80" Jan 15 05:57:02.180891 containerd[1599]: time="2026-01-15T05:57:02.180735092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:02.186586 containerd[1599]: time="2026-01-15T05:57:02.186102595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:02.190967 containerd[1599]: time="2026-01-15T05:57:02.190935517Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:57:02.192833 containerd[1599]: time="2026-01-15T05:57:02.192431439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:57:02.593325 containerd[1599]: time="2026-01-15T05:57:02.592810430Z" level=error msg="Failed to destroy network for sandbox \"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.601831 systemd[1]: run-netns-cni\x2de0fa56b1\x2d9061\x2d31b5\x2d0b6d\x2dafeb8fb8366f.mount: Deactivated successfully. Jan 15 05:57:02.613479 containerd[1599]: time="2026-01-15T05:57:02.613123782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.614936 kubelet[2864]: E0115 05:57:02.614813 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.614936 kubelet[2864]: E0115 05:57:02.614878 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glvpn" Jan 15 05:57:02.614936 kubelet[2864]: E0115 05:57:02.614899 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-glvpn" Jan 15 05:57:02.620725 kubelet[2864]: E0115 05:57:02.620125 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f4b478e1ff2a62f09e61130780886dc9571ba4a17c073d2b009e6889d3d1c24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:02.652074 containerd[1599]: time="2026-01-15T05:57:02.651859147Z" level=error msg="Failed to destroy network for sandbox \"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.668885 containerd[1599]: time="2026-01-15T05:57:02.668189534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.673546 kubelet[2864]: E0115 05:57:02.668786 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.673546 kubelet[2864]: E0115 05:57:02.668834 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:57:02.673546 kubelet[2864]: E0115 05:57:02.668855 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" Jan 15 05:57:02.673657 kubelet[2864]: E0115 05:57:02.668903 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22ab8467eaa0b0e8475fd4f9c0043be31f80fc7021f973ed04ff8cae6714835c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:02.684592 containerd[1599]: time="2026-01-15T05:57:02.683095300Z" level=error msg="Failed to destroy network for 
sandbox \"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.701612 containerd[1599]: time="2026-01-15T05:57:02.701457714Z" level=error msg="Failed to destroy network for sandbox \"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.701865 containerd[1599]: time="2026-01-15T05:57:02.701506088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.703556 kubelet[2864]: E0115 05:57:02.703520 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.703726 kubelet[2864]: E0115 05:57:02.703701 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:57:02.703851 kubelet[2864]: E0115 05:57:02.703798 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" Jan 15 05:57:02.704911 kubelet[2864]: E0115 05:57:02.704866 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"839c135dd4f7b491a731601b9b366d74c142e59998a44eec64969b7124ade907\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:02.711716 
containerd[1599]: time="2026-01-15T05:57:02.710794718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.712177 kubelet[2864]: E0115 05:57:02.711083 2864 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:57:02.712177 kubelet[2864]: E0115 05:57:02.711134 2864 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:57:02.712177 kubelet[2864]: E0115 05:57:02.711164 2864 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" Jan 15 05:57:02.712621 kubelet[2864]: E0115 05:57:02.711215 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34f4dab3c05bced7c079c7b9fd8814ea38d0a4a9a872f0c03e1537674c446ce8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:03.203439 systemd[1]: run-netns-cni\x2dd24a1e4a\x2d1a98\x2d82c7\x2d67c2\x2d56d42b7572a8.mount: Deactivated successfully. Jan 15 05:57:03.203576 systemd[1]: run-netns-cni\x2d259ebff6\x2dcf29\x2d025a\x2d4106\x2d7a3eaa7484a8.mount: Deactivated successfully. Jan 15 05:57:03.203676 systemd[1]: run-netns-cni\x2dca47d5d8\x2dc656\x2d10a6\x2d0577\x2dd092ccb2dbf7.mount: Deactivated successfully. Jan 15 05:57:03.741204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794679928.mount: Deactivated successfully. 
Jan 15 05:57:03.811120 containerd[1599]: time="2026-01-15T05:57:03.807784162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:57:03.814395 containerd[1599]: time="2026-01-15T05:57:03.814360044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 05:57:03.817672 containerd[1599]: time="2026-01-15T05:57:03.817542341Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:57:03.824419 containerd[1599]: time="2026-01-15T05:57:03.824044140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:57:03.824828 containerd[1599]: time="2026-01-15T05:57:03.824796498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 17.774092403s" Jan 15 05:57:03.825057 containerd[1599]: time="2026-01-15T05:57:03.824916077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 05:57:03.877055 containerd[1599]: time="2026-01-15T05:57:03.876759064Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 05:57:03.902404 containerd[1599]: time="2026-01-15T05:57:03.902154294Z" level=info msg="Container 61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:57:03.924616 containerd[1599]: time="2026-01-15T05:57:03.924449672Z" level=info msg="CreateContainer within sandbox \"4c071dd4b842004dfe7f1caa15c54544a03db61b343cd7a0a83ab6632b69e06d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc\"" Jan 15 05:57:03.926656 containerd[1599]: time="2026-01-15T05:57:03.926159669Z" level=info msg="StartContainer for \"61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc\"" Jan 15 05:57:03.929557 containerd[1599]: time="2026-01-15T05:57:03.929434035Z" level=info msg="connecting to shim 61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc" address="unix:///run/containerd/s/e26c7df083bb5e1141bdaedc87ce7dd4337a3c204cd6e7728730a19d99271cbb" protocol=ttrpc version=3 Jan 15 05:57:04.053100 systemd[1]: Started cri-containerd-61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc.scope - libcontainer container 61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc. 
Jan 15 05:57:04.202000 audit: BPF prog-id=170 op=LOAD Jan 15 05:57:04.217575 kernel: audit: type=1334 audit(1768456624.202:574): prog-id=170 op=LOAD Jan 15 05:57:04.202000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.260649 kernel: audit: type=1300 audit(1768456624.202:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.260804 kernel: audit: type=1327 audit(1768456624.202:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: BPF prog-id=171 op=LOAD Jan 15 05:57:04.318548 kernel: audit: type=1334 audit(1768456624.202:575): prog-id=171 op=LOAD Jan 15 05:57:04.318677 kernel: audit: type=1300 audit(1768456624.202:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.202000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.362213 kernel: audit: type=1327 audit(1768456624.202:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.396606 kernel: audit: type=1334 audit(1768456624.202:576): prog-id=171 op=UNLOAD Jan 15 05:57:04.202000 audit: BPF prog-id=171 op=UNLOAD Jan 15 05:57:04.202000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.425768 containerd[1599]: time="2026-01-15T05:57:04.423512710Z" level=info msg="StartContainer for 
\"61b29ba50e705484188e0645ce8696056e94f73f07c5ee11d929cce06c89a0dc\" returns successfully" Jan 15 05:57:04.445612 kernel: audit: type=1300 audit(1768456624.202:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.448651 kernel: audit: type=1327 audit(1768456624.202:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: BPF prog-id=170 op=UNLOAD Jan 15 05:57:04.202000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.490096 kernel: audit: type=1334 audit(1768456624.202:577): prog-id=170 op=UNLOAD Jan 15 05:57:04.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.202000 audit: BPF prog-id=172 op=LOAD Jan 15 05:57:04.202000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3371 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:04.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631623239626135306537303534383431383865303634356365383639 Jan 15 05:57:04.828198 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 05:57:04.833390 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 15 05:57:05.252650 kubelet[2864]: E0115 05:57:05.248830 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:05.339047 kubelet[2864]: I0115 05:57:05.338598 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jmr9t" podStartSLOduration=3.442447699 podStartE2EDuration="42.338581329s" podCreationTimestamp="2026-01-15 05:56:23 +0000 UTC" firstStartedPulling="2026-01-15 05:56:24.93151538 +0000 UTC m=+55.622431438" lastFinishedPulling="2026-01-15 05:57:03.82764901 +0000 UTC m=+94.518565068" observedRunningTime="2026-01-15 05:57:05.328058503 +0000 UTC m=+96.018974571" watchObservedRunningTime="2026-01-15 05:57:05.338581329 +0000 UTC m=+96.029497377" Jan 15 05:57:05.437122 kubelet[2864]: I0115 05:57:05.436777 2864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-ca-bundle\") pod \"9a88b368-bf02-4b48-90b9-c88e30070c80\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " Jan 15 05:57:05.437122 kubelet[2864]: I0115 05:57:05.436840 2864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rngf\" (UniqueName: \"kubernetes.io/projected/9a88b368-bf02-4b48-90b9-c88e30070c80-kube-api-access-2rngf\") pod \"9a88b368-bf02-4b48-90b9-c88e30070c80\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " Jan 15 05:57:05.437122 kubelet[2864]: I0115 05:57:05.436864 2864 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-backend-key-pair\") pod \"9a88b368-bf02-4b48-90b9-c88e30070c80\" (UID: \"9a88b368-bf02-4b48-90b9-c88e30070c80\") " Jan 15 05:57:05.441183 kubelet[2864]: I0115 05:57:05.439219 2864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a88b368-bf02-4b48-90b9-c88e30070c80" (UID: "9a88b368-bf02-4b48-90b9-c88e30070c80"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 05:57:05.463779 systemd[1]: var-lib-kubelet-pods-9a88b368\x2dbf02\x2d4b48\x2d90b9\x2dc88e30070c80-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 15 05:57:05.468153 kubelet[2864]: I0115 05:57:05.468077 2864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a88b368-bf02-4b48-90b9-c88e30070c80" (UID: "9a88b368-bf02-4b48-90b9-c88e30070c80"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 05:57:05.513153 systemd[1]: var-lib-kubelet-pods-9a88b368\x2dbf02\x2d4b48\x2d90b9\x2dc88e30070c80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rngf.mount: Deactivated successfully. 
Jan 15 05:57:05.516481 kubelet[2864]: I0115 05:57:05.514440 2864 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a88b368-bf02-4b48-90b9-c88e30070c80-kube-api-access-2rngf" (OuterVolumeSpecName: "kube-api-access-2rngf") pod "9a88b368-bf02-4b48-90b9-c88e30070c80" (UID: "9a88b368-bf02-4b48-90b9-c88e30070c80"). InnerVolumeSpecName "kube-api-access-2rngf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 05:57:05.540003 kubelet[2864]: I0115 05:57:05.539654 2864 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 15 05:57:05.541737 kubelet[2864]: I0115 05:57:05.541472 2864 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rngf\" (UniqueName: \"kubernetes.io/projected/9a88b368-bf02-4b48-90b9-c88e30070c80-kube-api-access-2rngf\") on node \"localhost\" DevicePath \"\"" Jan 15 05:57:05.541737 kubelet[2864]: I0115 05:57:05.541550 2864 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a88b368-bf02-4b48-90b9-c88e30070c80-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 15 05:57:06.186159 systemd[1]: Removed slice kubepods-besteffort-pod9a88b368_bf02_4b48_90b9_c88e30070c80.slice - libcontainer container kubepods-besteffort-pod9a88b368_bf02_4b48_90b9_c88e30070c80.slice. Jan 15 05:57:06.256417 kubelet[2864]: E0115 05:57:06.255835 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:06.561723 kubelet[2864]: I0115 05:57:06.561013 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf-whisker-backend-key-pair\") pod \"whisker-74f7495bcf-nsnsl\" (UID: \"4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf\") " pod="calico-system/whisker-74f7495bcf-nsnsl" Jan 15 05:57:06.561723 kubelet[2864]: I0115 05:57:06.561169 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf-whisker-ca-bundle\") pod \"whisker-74f7495bcf-nsnsl\" (UID: \"4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf\") " pod="calico-system/whisker-74f7495bcf-nsnsl" Jan 15 05:57:06.561723 kubelet[2864]: I0115 05:57:06.561199 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr462\" (UniqueName: \"kubernetes.io/projected/4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf-kube-api-access-sr462\") pod \"whisker-74f7495bcf-nsnsl\" (UID: \"4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf\") " pod="calico-system/whisker-74f7495bcf-nsnsl" Jan 15 05:57:06.593108 systemd[1]: Created slice kubepods-besteffort-pod4719fe8a_c2f6_4614_8c44_0ea32f2ef4cf.slice - libcontainer container kubepods-besteffort-pod4719fe8a_c2f6_4614_8c44_0ea32f2ef4cf.slice. 
Jan 15 05:57:06.954191 containerd[1599]: time="2026-01-15T05:57:06.953211529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f7495bcf-nsnsl,Uid:4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:08.213968 kubelet[2864]: I0115 05:57:08.212664 2864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a88b368-bf02-4b48-90b9-c88e30070c80" path="/var/lib/kubelet/pods/9a88b368-bf02-4b48-90b9-c88e30070c80/volumes" Jan 15 05:57:08.221558 systemd-networkd[1494]: calie62edd4d7ba: Link UP Jan 15 05:57:08.222967 systemd-networkd[1494]: calie62edd4d7ba: Gained carrier Jan 15 05:57:08.290528 containerd[1599]: 2026-01-15 05:57:07.155 [INFO][4294] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 05:57:08.290528 containerd[1599]: 2026-01-15 05:57:07.278 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74f7495bcf--nsnsl-eth0 whisker-74f7495bcf- calico-system 4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf 1032 0 2026-01-15 05:57:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74f7495bcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74f7495bcf-nsnsl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie62edd4d7ba [] [] }} ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-" Jan 15 05:57:08.290528 containerd[1599]: 2026-01-15 05:57:07.278 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.290528 containerd[1599]: 2026-01-15 05:57:07.906 [INFO][4331] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" HandleID="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Workload="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:07.914 [INFO][4331] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" HandleID="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Workload="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000480fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74f7495bcf-nsnsl", "timestamp":"2026-01-15 05:57:07.90661828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:07.914 [INFO][4331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:07.915 [INFO][4331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:07.919 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:07.979 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" host="localhost" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:08.004 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:08.039 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:08.046 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:08.054 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:08.293078 containerd[1599]: 2026-01-15 05:57:08.054 [INFO][4331] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" host="localhost" Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.062 [INFO][4331] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3 Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.077 [INFO][4331] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" host="localhost" Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.101 [INFO][4331] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" host="localhost" Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.102 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" host="localhost" Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.102 [INFO][4331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:57:08.299975 containerd[1599]: 2026-01-15 05:57:08.102 [INFO][4331] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" HandleID="k8s-pod-network.7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Workload="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.300199 containerd[1599]: 2026-01-15 05:57:08.145 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74f7495bcf--nsnsl-eth0", GenerateName:"whisker-74f7495bcf-", Namespace:"calico-system", SelfLink:"", UID:"4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f7495bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74f7495bcf-nsnsl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie62edd4d7ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:08.300199 containerd[1599]: 2026-01-15 05:57:08.146 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.301061 containerd[1599]: 2026-01-15 05:57:08.146 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie62edd4d7ba ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.301061 containerd[1599]: 2026-01-15 05:57:08.224 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.301148 containerd[1599]: 2026-01-15 05:57:08.225 [INFO][4294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74f7495bcf--nsnsl-eth0", GenerateName:"whisker-74f7495bcf-", Namespace:"calico-system", SelfLink:"", UID:"4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 57, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f7495bcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3", Pod:"whisker-74f7495bcf-nsnsl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie62edd4d7ba", MAC:"96:89:62:2a:5b:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:08.302608 containerd[1599]: 2026-01-15 05:57:08.269 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" Namespace="calico-system" Pod="whisker-74f7495bcf-nsnsl" WorkloadEndpoint="localhost-k8s-whisker--74f7495bcf--nsnsl-eth0" Jan 15 05:57:08.833661 containerd[1599]: time="2026-01-15T05:57:08.831979962Z" level=info msg="connecting to shim 7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3" address="unix:///run/containerd/s/630defa76e08b5b32bb136d821232492609975850c99958972a776c132180abb" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:09.321127 systemd[1]: Started cri-containerd-7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3.scope - libcontainer container 7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3. 
Jan 15 05:57:09.479183 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 15 05:57:09.479688 kernel: audit: type=1334 audit(1768456629.459:579): prog-id=173 op=LOAD Jan 15 05:57:09.459000 audit: BPF prog-id=173 op=LOAD Jan 15 05:57:09.479700 systemd-networkd[1494]: calie62edd4d7ba: Gained IPv6LL Jan 15 05:57:09.460000 audit: BPF prog-id=174 op=LOAD Jan 15 05:57:09.495600 kernel: audit: type=1334 audit(1768456629.460:580): prog-id=174 op=LOAD Jan 15 05:57:09.495713 kernel: audit: type=1300 audit(1768456629.460:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c8238 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.460000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c8238 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.483714 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:09.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.553468 kernel: audit: type=1327 audit(1768456629.460:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.553595 kernel: audit: type=1334 audit(1768456629.460:581): prog-id=174 op=UNLOAD Jan 15 05:57:09.460000 audit: BPF prog-id=174 op=UNLOAD Jan 15 05:57:09.460000 audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.645659 kernel: audit: type=1300 audit(1768456629.460:581): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.645926 kernel: audit: type=1327 audit(1768456629.460:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.461000 audit: BPF prog-id=175 op=LOAD Jan 15 05:57:09.701646 kernel: audit: type=1334 audit(1768456629.461:582): prog-id=175 op=LOAD Jan 15 05:57:09.461000 
audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c8488 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.756625 kernel: audit: type=1300 audit(1768456629.461:582): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c8488 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.802953 kernel: audit: type=1327 audit(1768456629.461:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.461000 audit: BPF prog-id=176 op=LOAD Jan 15 05:57:09.461000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001c8218 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.461000 audit: BPF prog-id=176 op=UNLOAD Jan 15 05:57:09.461000 audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.461000 audit: BPF prog-id=175 op=UNLOAD Jan 15 05:57:09.461000 audit[4446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.461000 audit: BPF prog-id=177 op=LOAD Jan 15 05:57:09.461000 audit[4446]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001c86e8 a2=98 a3=0 items=0 ppid=4432 pid=4446 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731313538393362653232333330633832656662613735616639313361 Jan 15 05:57:09.585000 audit: BPF prog-id=178 op=LOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff61134f60 a2=98 a3=1fffffffffffffff items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.585000 audit: BPF prog-id=178 op=UNLOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff61134f30 a3=0 items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.585000 audit: BPF prog-id=179 op=LOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff61134e40 a2=94 a3=3 items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.585000 audit: BPF prog-id=179 op=UNLOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff61134e40 a2=94 a3=3 items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.585000 audit: BPF prog-id=180 op=LOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff61134e80 a2=94 a3=7fff61135060 items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.585000 audit: BPF prog-id=180 op=UNLOAD Jan 15 05:57:09.585000 audit[4492]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff61134e80 a2=94 a3=7fff61135060 items=0 ppid=4326 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.585000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:57:09.603000 audit: BPF prog-id=181 op=LOAD Jan 15 05:57:09.603000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc38912160 a2=98 a3=3 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.603000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.603000 audit: BPF prog-id=181 op=UNLOAD Jan 15 05:57:09.603000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc38912130 a3=0 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.603000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.604000 audit: BPF prog-id=182 op=LOAD Jan 15 05:57:09.604000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc38911f50 a2=94 a3=54428f items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.604000 audit: BPF prog-id=182 op=UNLOAD Jan 15 05:57:09.604000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc38911f50 a2=94 a3=54428f items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.604000 audit: BPF prog-id=183 op=LOAD Jan 15 05:57:09.604000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc38911f80 a2=94 a3=2 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.604000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.803000 audit: BPF prog-id=183 op=UNLOAD Jan 15 
05:57:09.803000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc38911f80 a2=0 a3=2 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:09.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:09.855981 containerd[1599]: time="2026-01-15T05:57:09.852135859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f7495bcf-nsnsl,Uid:4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"7115893be22330c82efba75af913a732293591d45d67deb2e501857f0eb4e9c3\"" Jan 15 05:57:09.873077 containerd[1599]: time="2026-01-15T05:57:09.872047768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:57:09.968122 containerd[1599]: time="2026-01-15T05:57:09.968042495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:09.978006 containerd[1599]: time="2026-01-15T05:57:09.977930947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:57:09.978588 containerd[1599]: time="2026-01-15T05:57:09.978168103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:09.985476 kubelet[2864]: E0115 05:57:09.983478 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:09.985476 kubelet[2864]: E0115 05:57:09.983677 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:09.985476 kubelet[2864]: E0115 05:57:09.983793 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:09.991705 containerd[1599]: time="2026-01-15T05:57:09.991514658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:57:10.075922 containerd[1599]: time="2026-01-15T05:57:10.074743415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:10.085050 containerd[1599]: time="2026-01-15T05:57:10.084975784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:57:10.085588 containerd[1599]: time="2026-01-15T05:57:10.085556903Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:10.091960 kubelet[2864]: E0115 05:57:10.090754 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:10.091960 kubelet[2864]: E0115 05:57:10.090957 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:10.091960 kubelet[2864]: E0115 05:57:10.091069 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:10.091960 kubelet[2864]: E0115 05:57:10.091134 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:10.334194 kubelet[2864]: E0115 05:57:10.334017 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:10.447000 audit[4506]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:10.447000 audit[4506]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf042f450 a2=0 a3=7ffcf042f43c items=0 ppid=3019 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:57:10.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:10.464000 audit[4506]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:10.464000 audit[4506]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf042f450 a2=0 a3=0 items=0 ppid=3019 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.464000 audit: BPF prog-id=184 op=LOAD Jan 15 05:57:10.464000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc38911e40 a2=94 a3=1 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.464000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.465000 audit: BPF prog-id=184 op=UNLOAD Jan 15 05:57:10.465000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc38911e40 a2=94 a3=1 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:10.486000 audit: BPF prog-id=185 op=LOAD Jan 15 05:57:10.486000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc38911e30 a2=94 a3=4 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.486000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.487000 audit: BPF prog-id=185 op=UNLOAD Jan 15 05:57:10.487000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc38911e30 a2=0 a3=4 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.487000 audit: BPF prog-id=186 op=LOAD Jan 15 05:57:10.487000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc38911c90 a2=94 a3=5 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.487000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.488000 audit: BPF prog-id=186 op=UNLOAD Jan 15 05:57:10.488000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc38911c90 a2=0 a3=5 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.488000 audit: BPF prog-id=187 op=LOAD Jan 15 05:57:10.488000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc38911eb0 a2=94 a3=6 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.488000 audit: BPF prog-id=187 op=UNLOAD Jan 15 05:57:10.488000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc38911eb0 a2=0 a3=6 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.488000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.489000 audit: BPF prog-id=188 op=LOAD Jan 15 05:57:10.489000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc38911660 a2=94 a3=88 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.489000 audit: BPF prog-id=189 op=LOAD Jan 15 05:57:10.489000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc389114e0 a2=94 a3=2 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.489000 audit: BPF prog-id=189 op=UNLOAD Jan 15 05:57:10.489000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc38911510 a2=0 a3=7ffc38911610 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.489000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.491000 audit: BPF prog-id=188 op=UNLOAD Jan 15 05:57:10.491000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=32f7ad10 a2=0 a3=d2a1e2e971167d92 items=0 ppid=4326 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.491000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:57:10.544000 audit: BPF prog-id=190 op=LOAD Jan 15 05:57:10.544000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36a6c9c0 a2=98 a3=1999999999999999 items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.544000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.544000 audit: BPF prog-id=190 op=UNLOAD Jan 15 05:57:10.544000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe36a6c990 a3=0 items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.544000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.544000 audit: BPF prog-id=191 op=LOAD Jan 15 05:57:10.544000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36a6c8a0 a2=94 a3=ffff items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.544000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.544000 audit: BPF prog-id=191 op=UNLOAD Jan 15 05:57:10.544000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe36a6c8a0 a2=94 a3=ffff items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.544000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.545000 audit: BPF prog-id=192 op=LOAD Jan 15 05:57:10.545000 audit[4509]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36a6c8e0 a2=94 a3=7ffe36a6cac0 items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.545000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.545000 audit: BPF prog-id=192 op=UNLOAD Jan 15 05:57:10.545000 audit[4509]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe36a6c8e0 a2=94 a3=7ffe36a6cac0 items=0 ppid=4326 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:10.545000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:57:10.878487 update_engine[1589]: I20260115 05:57:10.877174 1589 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 15 05:57:10.878487 update_engine[1589]: I20260115 05:57:10.877614 1589 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 15 05:57:10.886089 update_engine[1589]: I20260115 05:57:10.885766 1589 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 15 05:57:10.890705 update_engine[1589]: I20260115 05:57:10.888556 1589 omaha_request_params.cc:62] Current group set to developer Jan 15 05:57:10.891606 update_engine[1589]: I20260115 05:57:10.891566 1589 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 15 05:57:10.891776 update_engine[1589]: I20260115 05:57:10.891748 1589 update_attempter.cc:643] Scheduling an action processor start. Jan 15 05:57:10.892003 update_engine[1589]: I20260115 05:57:10.891981 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 05:57:10.892106 update_engine[1589]: I20260115 05:57:10.892089 1589 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 15 05:57:10.892226 update_engine[1589]: I20260115 05:57:10.892206 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 05:57:10.892754 update_engine[1589]: I20260115 05:57:10.892724 1589 omaha_request_action.cc:272] Request: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.892754 update_engine[1589]: Jan 15 05:57:10.893111 update_engine[1589]: I20260115 05:57:10.893088 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 05:57:10.932616 update_engine[1589]: I20260115 05:57:10.932542 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 05:57:10.938965 update_engine[1589]: I20260115 05:57:10.938895 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
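The audit records above carry the invoking command line as a hex-encoded, NUL-separated proctitle field. A minimal Python sketch, assuming the journal is piped in as plain text, that turns those fields back into readable command lines:

#!/usr/bin/env python3
# Decode hex-encoded proctitle fields from Linux audit PROCTITLE records.
import re
import sys

PROCTITLE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

for line in sys.stdin:
    m = PROCTITLE.search(line)
    if not m:
        continue
    raw = bytes.fromhex(m.group(1))
    # argv elements are NUL-separated inside the proctitle blob.
    print(" ".join(raw.decode("utf-8", errors="replace").split("\x00")))

Run over this capture, the short records decode to "bpftool map list --json" and "iptables-restore -w 5 --noflush --counters", while the long map-create records decode to "bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_", with the trailing map name cut off at what appears to be a 128-byte proctitle limit.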
Jan 15 05:57:10.953671 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 15 05:57:10.955752 update_engine[1589]: E20260115 05:57:10.955597 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 05:57:10.955752 update_engine[1589]: I20260115 05:57:10.955710 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 15 05:57:11.092733 systemd-networkd[1494]: vxlan.calico: Link UP Jan 15 05:57:11.092748 systemd-networkd[1494]: vxlan.calico: Gained carrier Jan 15 05:57:11.201000 audit: BPF prog-id=193 op=LOAD Jan 15 05:57:11.201000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc8599830 a2=98 a3=0 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.201000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.209000 audit: BPF prog-id=193 op=UNLOAD Jan 15 05:57:11.209000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffc8599800 a3=0 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.209000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.213000 audit: BPF prog-id=194 op=LOAD Jan 15 05:57:11.213000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc8599640 a2=94 a3=54428f items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.213000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.213000 audit: BPF prog-id=194 op=UNLOAD Jan 15 05:57:11.213000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffc8599640 a2=94 a3=54428f items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.213000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.213000 audit: BPF prog-id=195 op=LOAD Jan 15 05:57:11.213000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc8599670 a2=94 a3=2 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.213000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.213000 audit: BPF prog-id=195 op=UNLOAD Jan 15 05:57:11.213000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffc8599670 a2=0 a3=2 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.213000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.213000 audit: BPF prog-id=196 op=LOAD Jan 15 05:57:11.213000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc8599420 a2=94 a3=4 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.213000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.217000 audit: BPF prog-id=196 op=UNLOAD Jan 15 05:57:11.217000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc8599420 a2=94 a3=4 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.217000 audit: BPF prog-id=197 op=LOAD Jan 15 05:57:11.217000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc8599520 a2=94 a3=7fffc85996a0 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.217000 audit: BPF prog-id=197 op=UNLOAD Jan 15 05:57:11.217000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc8599520 a2=0 a3=7fffc85996a0 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 
05:57:11.220000 audit: BPF prog-id=198 op=LOAD Jan 15 05:57:11.220000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc8598c50 a2=94 a3=2 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.220000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.221000 audit: BPF prog-id=198 op=UNLOAD Jan 15 05:57:11.221000 audit[4534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc8598c50 a2=0 a3=2 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.221000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.221000 audit: BPF prog-id=199 op=LOAD Jan 15 05:57:11.221000 audit[4534]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc8598d50 a2=94 a3=30 items=0 ppid=4326 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.221000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:57:11.265000 audit: BPF prog-id=200 op=LOAD Jan 15 05:57:11.265000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdab13b130 a2=98 a3=0 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.265000 audit: BPF prog-id=200 op=UNLOAD Jan 15 05:57:11.265000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdab13b100 a3=0 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.265000 audit: BPF prog-id=201 op=LOAD Jan 15 05:57:11.265000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdab13af20 a2=94 a3=54428f items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:57:11.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.266000 audit: BPF prog-id=201 op=UNLOAD Jan 15 05:57:11.266000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdab13af20 a2=94 a3=54428f items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.266000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.266000 audit: BPF prog-id=202 op=LOAD Jan 15 05:57:11.266000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdab13af50 a2=94 a3=2 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.266000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.266000 audit: BPF prog-id=202 op=UNLOAD Jan 15 05:57:11.266000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdab13af50 a2=0 a3=2 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.266000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.341021 kubelet[2864]: E0115 05:57:11.340919 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:11.802000 audit: BPF prog-id=203 op=LOAD Jan 15 05:57:11.802000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdab13ae10 a2=94 a3=1 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.802000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.802000 audit: BPF prog-id=203 op=UNLOAD Jan 15 05:57:11.802000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdab13ae10 a2=94 a3=1 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.824000 audit: BPF prog-id=204 op=LOAD Jan 15 05:57:11.824000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdab13ae00 a2=94 a3=4 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.824000 audit: BPF prog-id=204 op=UNLOAD Jan 15 05:57:11.824000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdab13ae00 a2=0 a3=4 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.824000 audit: BPF prog-id=205 op=LOAD Jan 15 05:57:11.824000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdab13ac60 a2=94 a3=5 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.824000 audit: BPF prog-id=205 op=UNLOAD Jan 15 05:57:11.824000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdab13ac60 a2=0 a3=5 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.824000 audit: BPF prog-id=206 op=LOAD Jan 15 05:57:11.824000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdab13ae80 a2=94 a3=6 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.824000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.826000 audit: BPF prog-id=206 op=UNLOAD Jan 15 05:57:11.826000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdab13ae80 a2=0 a3=6 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.826000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.826000 audit: BPF prog-id=207 op=LOAD Jan 15 05:57:11.826000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdab13a630 a2=94 a3=88 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.826000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.827000 audit: BPF prog-id=208 op=LOAD Jan 15 05:57:11.827000 audit[4544]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdab13a4b0 a2=94 a3=2 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.827000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.827000 audit: BPF prog-id=208 op=UNLOAD Jan 15 05:57:11.827000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdab13a4e0 a2=0 a3=7ffdab13a5e0 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.827000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.828000 audit: BPF prog-id=207 op=UNLOAD Jan 15 05:57:11.828000 audit[4544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=23bead10 a2=0 a3=5c93be70eb84e762 items=0 ppid=4326 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.828000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:57:11.851000 audit: BPF prog-id=199 op=UNLOAD Jan 15 05:57:11.851000 audit[4326]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 
a0=ffffffffffffff9c a1=c000a26180 a2=0 a3=0 items=0 ppid=4315 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:11.851000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 05:57:12.187000 audit[4567]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:12.187000 audit[4567]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff80a20300 a2=0 a3=7fff80a202ec items=0 ppid=4326 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:12.187000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:12.209000 audit[4571]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:12.209000 audit[4571]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd6a2109d0 a2=0 a3=7ffd6a2109bc items=0 ppid=4326 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:12.209000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:12.252000 audit[4568]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:12.252000 audit[4568]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcfc0bc6f0 a2=0 a3=7ffcfc0bc6dc items=0 ppid=4326 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:12.252000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:12.258000 audit[4569]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4569 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:12.258000 audit[4569]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe710ed420 a2=0 a3=55f99b037000 items=0 ppid=4326 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:12.258000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:12.359028 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Jan 15 05:57:13.171840 containerd[1599]: time="2026-01-15T05:57:13.171570677Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:57:13.184489 containerd[1599]: time="2026-01-15T05:57:13.182688639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:14.042126 systemd-networkd[1494]: calidd8a8933bb7: Link UP Jan 15 05:57:14.046954 systemd-networkd[1494]: calidd8a8933bb7: Gained carrier Jan 15 05:57:14.128912 containerd[1599]: 2026-01-15 05:57:13.553 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0 calico-apiserver-ffcfc74f7- calico-apiserver b9ae406d-9e12-445c-a7c0-69e8063e9379 933 0 2026-01-15 05:56:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ffcfc74f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ffcfc74f7-b2c68 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd8a8933bb7 [] [] }} ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-" Jan 15 05:57:14.128912 containerd[1599]: 2026-01-15 05:57:13.554 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.128912 containerd[1599]: 2026-01-15 05:57:13.800 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" HandleID="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.801 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" HandleID="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000225f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ffcfc74f7-b2c68", "timestamp":"2026-01-15 05:57:13.800064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.801 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.801 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.801 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.889 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" host="localhost" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.917 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.941 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.956 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.967 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:14.129930 containerd[1599]: 2026-01-15 05:57:13.967 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" host="localhost" Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:13.975 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786 Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:13.987 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" host="localhost" Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:14.013 [INFO][4609] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" host="localhost" Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:14.014 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" host="localhost" Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:14.014 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
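At this point the IPAM plugin has handed 192.168.88.130 to calico-apiserver-ffcfc74f7-b2c68 out of the host-affine block 192.168.88.128/26. The arithmetic behind "Trying affinity for 192.168.88.128/26" is ordinary CIDR math; a small standard-library sketch (illustrative only, not Calico's own code) that checks the claim and shows the capacity of such a block:

#!/usr/bin/env python3
# Confirm that the claimed address sits inside the host's affinity block
# and show how many addresses a /26 block can serve.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")     # affinity block from the log
claimed = ipaddress.ip_address("192.168.88.130")      # calico-apiserver-ffcfc74f7-b2c68

print(claimed in block)          # True: the claim is consistent with the block
print(block.num_addresses)       # 64 addresses, 192.168.88.128 through .191
print(block.broadcast_address)   # 192.168.88.191

Subsequent sandboxes on this host draw from the same block, as the next IPAM run below shows.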
Jan 15 05:57:14.134021 containerd[1599]: 2026-01-15 05:57:14.014 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" HandleID="k8s-pod-network.085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.135836 containerd[1599]: 2026-01-15 05:57:14.028 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0", GenerateName:"calico-apiserver-ffcfc74f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9ae406d-9e12-445c-a7c0-69e8063e9379", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ffcfc74f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ffcfc74f7-b2c68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd8a8933bb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:14.136175 containerd[1599]: 2026-01-15 05:57:14.029 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.136175 containerd[1599]: 2026-01-15 05:57:14.029 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd8a8933bb7 ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.136175 containerd[1599]: 2026-01-15 05:57:14.050 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.137158 containerd[1599]: 2026-01-15 05:57:14.053 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0", GenerateName:"calico-apiserver-ffcfc74f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b9ae406d-9e12-445c-a7c0-69e8063e9379", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ffcfc74f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786", Pod:"calico-apiserver-ffcfc74f7-b2c68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd8a8933bb7", MAC:"92:ce:cd:09:11:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:14.138626 containerd[1599]: 2026-01-15 05:57:14.113 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-b2c68" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--b2c68-eth0" Jan 15 05:57:14.188083 kubelet[2864]: E0115 05:57:14.187201 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:14.202669 containerd[1599]: time="2026-01-15T05:57:14.197060157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,}" Jan 15 05:57:14.206897 kubelet[2864]: E0115 05:57:14.206868 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:14.226048 containerd[1599]: time="2026-01-15T05:57:14.224566427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:14.226048 containerd[1599]: time="2026-01-15T05:57:14.224890343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,}" Jan 15 05:57:14.247000 audit[4635]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4635 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:14.247000 audit[4635]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff6a36cbe0 a2=0 a3=7fff6a36cbcc items=0 ppid=4326 pid=4635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:14.247000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:14.272533 systemd-networkd[1494]: calic9a03621119: Link UP Jan 15 05:57:14.278853 systemd-networkd[1494]: calic9a03621119: Gained carrier Jan 15 05:57:14.374167 containerd[1599]: 2026-01-15 05:57:13.537 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0 calico-kube-controllers-98d64bddf- calico-system 9f5c6d0a-fde4-4893-b36a-da65165e8843 925 0 2026-01-15 05:56:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:98d64bddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-98d64bddf-vgrjr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic9a03621119 [] [] }} ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-" Jan 15 05:57:14.374167 containerd[1599]: 2026-01-15 05:57:13.540 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.374167 containerd[1599]: 2026-01-15 05:57:13.853 [INFO][4606] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" HandleID="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Workload="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:13.854 [INFO][4606] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" HandleID="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Workload="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cec80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-98d64bddf-vgrjr", "timestamp":"2026-01-15 05:57:13.853831904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:13.854 [INFO][4606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.014 [INFO][4606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.014 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.047 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" host="localhost" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.088 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.139 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.160 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.174 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:14.375057 containerd[1599]: 2026-01-15 05:57:14.176 [INFO][4606] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" host="localhost" Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.186 [INFO][4606] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.217 [INFO][4606] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" host="localhost" Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.245 [INFO][4606] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" host="localhost" Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.245 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" host="localhost" Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.245 [INFO][4606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:57:14.375874 containerd[1599]: 2026-01-15 05:57:14.246 [INFO][4606] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" HandleID="k8s-pod-network.23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Workload="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.376066 containerd[1599]: 2026-01-15 05:57:14.257 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0", GenerateName:"calico-kube-controllers-98d64bddf-", Namespace:"calico-system", SelfLink:"", UID:"9f5c6d0a-fde4-4893-b36a-da65165e8843", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98d64bddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-98d64bddf-vgrjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9a03621119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:14.376527 containerd[1599]: 2026-01-15 05:57:14.258 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.376527 containerd[1599]: 2026-01-15 05:57:14.259 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9a03621119 ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.376527 containerd[1599]: 2026-01-15 05:57:14.280 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.376646 containerd[1599]: 2026-01-15 05:57:14.283 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0", GenerateName:"calico-kube-controllers-98d64bddf-", Namespace:"calico-system", SelfLink:"", UID:"9f5c6d0a-fde4-4893-b36a-da65165e8843", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98d64bddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb", Pod:"calico-kube-controllers-98d64bddf-vgrjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9a03621119", MAC:"e2:9b:32:b1:77:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:14.377015 containerd[1599]: 2026-01-15 05:57:14.325 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" Namespace="calico-system" Pod="calico-kube-controllers-98d64bddf-vgrjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98d64bddf--vgrjr-eth0" Jan 15 05:57:14.454047 containerd[1599]: time="2026-01-15T05:57:14.452173492Z" level=info msg="connecting to shim 085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786" address="unix:///run/containerd/s/35deabb86430a405953f2e11615edf3fab24b21d4c62d8dc471fb8cd09335b42" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:14.634000 audit[4716]: NETFILTER_CFG table=filter:126 family=2 entries=40 op=nft_register_chain pid=4716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:14.646525 kernel: kauditd_printk_skb: 219 callbacks suppressed Jan 15 05:57:14.646621 kernel: audit: type=1325 audit(1768456634.634:656): table=filter:126 family=2 entries=40 op=nft_register_chain pid=4716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:14.634000 audit[4716]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fff48be27e0 a2=0 a3=7fff48be27cc items=0 ppid=4326 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:14.712158 containerd[1599]: time="2026-01-15T05:57:14.673681126Z" level=info msg="connecting to shim 23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb" 
address="unix:///run/containerd/s/a9039c6cbb1f18b2c5fa4d5c522d46bfa0924e6099bd5972a82161c672060d7b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:14.735934 kernel: audit: type=1300 audit(1768456634.634:656): arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fff48be27e0 a2=0 a3=7fff48be27cc items=0 ppid=4326 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:14.634000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:14.774131 kernel: audit: type=1327 audit(1768456634.634:656): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:14.882805 systemd[1]: Started cri-containerd-085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786.scope - libcontainer container 085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786. Jan 15 05:57:15.026000 audit: BPF prog-id=209 op=LOAD Jan 15 05:57:15.042655 kernel: audit: type=1334 audit(1768456635.026:657): prog-id=209 op=LOAD Jan 15 05:57:15.032000 audit: BPF prog-id=210 op=LOAD Jan 15 05:57:15.044005 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:15.057583 kernel: audit: type=1334 audit(1768456635.032:658): prog-id=210 op=LOAD Jan 15 05:57:15.032000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.114039 kernel: audit: type=1300 audit(1768456635.032:658): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.114168 kernel: audit: type=1327 audit(1768456635.032:658): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.032000 audit: BPF prog-id=210 op=UNLOAD Jan 15 05:57:15.165169 systemd[1]: Started cri-containerd-23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb.scope - libcontainer container 23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb. 
Jan 15 05:57:15.180958 kernel: audit: type=1334 audit(1768456635.032:659): prog-id=210 op=UNLOAD Jan 15 05:57:15.032000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.188640 containerd[1599]: time="2026-01-15T05:57:15.188210497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:57:15.238645 kernel: audit: type=1300 audit(1768456635.032:659): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.239590 kernel: audit: type=1327 audit(1768456635.032:659): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.037000 audit: BPF prog-id=211 op=LOAD Jan 15 05:57:15.037000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.037000 audit: BPF prog-id=212 op=LOAD Jan 15 05:57:15.037000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.037000 audit: BPF prog-id=212 op=UNLOAD Jan 15 05:57:15.037000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.037000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.037000 audit: BPF prog-id=211 op=UNLOAD Jan 15 05:57:15.037000 audit[4718]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.037000 audit: BPF prog-id=213 op=LOAD Jan 15 05:57:15.037000 audit[4718]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=4688 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038356564363763613336333636363164313063326364366133616633 Jan 15 05:57:15.286000 audit: BPF prog-id=214 op=LOAD Jan 15 05:57:15.290000 audit: BPF prog-id=215 op=LOAD Jan 15 05:57:15.290000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d2238 a2=98 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.290000 audit: BPF prog-id=215 op=UNLOAD Jan 15 05:57:15.290000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.292000 audit: BPF prog-id=216 op=LOAD Jan 15 05:57:15.292000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d2488 a2=98 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.292000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.293000 audit: BPF prog-id=217 op=LOAD Jan 15 05:57:15.293000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001d2218 a2=98 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.293000 audit: BPF prog-id=217 op=UNLOAD Jan 15 05:57:15.293000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.294000 audit: BPF prog-id=216 op=UNLOAD Jan 15 05:57:15.294000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.294000 audit: BPF prog-id=218 op=LOAD Jan 15 05:57:15.294000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d26e8 a2=98 a3=0 items=0 ppid=4725 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616531343366353032333766393632613034643533653763333665 Jan 15 05:57:15.310833 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:15.401512 containerd[1599]: time="2026-01-15T05:57:15.400226312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-b2c68,Uid:b9ae406d-9e12-445c-a7c0-69e8063e9379,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"085ed67ca3636661d10c2cd6a3af344396019bde859a568e77612256862b8786\"" Jan 15 05:57:15.415638 containerd[1599]: time="2026-01-15T05:57:15.414588465Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:15.419666 systemd-networkd[1494]: cali70889d3f285: Link UP Jan 15 05:57:15.432958 systemd-networkd[1494]: cali70889d3f285: Gained carrier Jan 15 05:57:15.596029 containerd[1599]: 2026-01-15 05:57:14.594 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--nxntl-eth0 goldmane-7c778bb748- calico-system 125448ce-e54b-4cc3-923a-6bb87264173b 927 0 2026-01-15 05:56:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-nxntl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali70889d3f285 [] [] }} ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-" Jan 15 05:57:15.596029 containerd[1599]: 2026-01-15 05:57:14.598 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.596029 containerd[1599]: 2026-01-15 05:57:14.948 [INFO][4732] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" HandleID="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Workload="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:14.956 [INFO][4732] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" HandleID="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Workload="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000515140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-nxntl", "timestamp":"2026-01-15 05:57:14.948972401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:14.959 [INFO][4732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:14.961 [INFO][4732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:14.961 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.016 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" host="localhost" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.061 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.201 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.270 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.307 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:15.597619 containerd[1599]: 2026-01-15 05:57:15.307 [INFO][4732] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" host="localhost" Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.318 [INFO][4732] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.329 [INFO][4732] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" host="localhost" Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.379 [INFO][4732] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" host="localhost" Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.380 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" host="localhost" Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.380 [INFO][4732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:57:15.600619 containerd[1599]: 2026-01-15 05:57:15.380 [INFO][4732] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" HandleID="k8s-pod-network.7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Workload="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.601613 containerd[1599]: 2026-01-15 05:57:15.386 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--nxntl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"125448ce-e54b-4cc3-923a-6bb87264173b", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-nxntl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70889d3f285", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:15.601613 containerd[1599]: 2026-01-15 05:57:15.386 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.602037 containerd[1599]: 2026-01-15 05:57:15.386 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70889d3f285 ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.602037 containerd[1599]: 2026-01-15 05:57:15.432 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.602078 containerd[1599]: 2026-01-15 05:57:15.440 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--nxntl-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"125448ce-e54b-4cc3-923a-6bb87264173b", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b", Pod:"goldmane-7c778bb748-nxntl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70889d3f285", MAC:"aa:81:46:eb:a7:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:15.604127 containerd[1599]: 2026-01-15 05:57:15.538 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" Namespace="calico-system" Pod="goldmane-7c778bb748-nxntl" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--nxntl-eth0" Jan 15 05:57:15.621023 containerd[1599]: time="2026-01-15T05:57:15.620535588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:15.648507 containerd[1599]: time="2026-01-15T05:57:15.648018804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:15.648507 containerd[1599]: time="2026-01-15T05:57:15.648479100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:15.648869 kubelet[2864]: E0115 05:57:15.648819 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:15.651475 kubelet[2864]: E0115 05:57:15.649989 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:15.651475 kubelet[2864]: E0115 05:57:15.650102 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:15.651475 kubelet[2864]: E0115 05:57:15.650147 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:15.816000 audit[4832]: NETFILTER_CFG table=filter:127 family=2 entries=52 op=nft_register_chain pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:15.816000 audit[4832]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7fff42072360 a2=0 a3=7fff4207234c items=0 ppid=4326 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:15.816000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:15.891222 containerd[1599]: time="2026-01-15T05:57:15.889662172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98d64bddf-vgrjr,Uid:9f5c6d0a-fde4-4893-b36a-da65165e8843,Namespace:calico-system,Attempt:0,} returns sandbox id \"23ae143f50237f962a04d53e7c36e6bb09d19ab652295af2b8d344098fd3bccb\"" Jan 15 05:57:15.936111 systemd-networkd[1494]: caliddce50a3304: Link UP Jan 15 05:57:15.938986 systemd-networkd[1494]: caliddce50a3304: Gained carrier Jan 15 05:57:15.943214 systemd-networkd[1494]: calidd8a8933bb7: Gained IPv6LL Jan 15 05:57:15.975541 containerd[1599]: time="2026-01-15T05:57:15.965529892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:57:15.975541 containerd[1599]: time="2026-01-15T05:57:15.966152558Z" level=info msg="connecting to shim 7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b" address="unix:///run/containerd/s/95106bba8f41640e4e438ea197ba5e1016aa7b490d7555f3807b0848a3fce1ae" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:16.006888 systemd-networkd[1494]: calic9a03621119: Gained IPv6LL Jan 15 05:57:16.089451 containerd[1599]: 2026-01-15 05:57:14.501 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--9hdpt-eth0 coredns-66bc5c9577- kube-system 28dbae26-ae3c-40cb-b52b-26db1f4b6ea2 923 0 2026-01-15 05:55:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-9hdpt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliddce50a3304 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-" Jan 15 05:57:16.089451 containerd[1599]: 2026-01-15 05:57:14.519 [INFO][4638] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.089451 containerd[1599]: 2026-01-15 05:57:15.016 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" HandleID="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Workload="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.016 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" HandleID="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Workload="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-9hdpt", "timestamp":"2026-01-15 05:57:15.015998567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.017 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.385 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.386 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.446 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" host="localhost" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.585 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.621 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.636 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.716 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.090050 containerd[1599]: 2026-01-15 05:57:15.741 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" host="localhost" Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 05:57:15.791 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91 Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 05:57:15.830 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" host="localhost" Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 
05:57:15.878 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" host="localhost" Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 05:57:15.879 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" host="localhost" Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 05:57:15.889 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 05:57:16.096878 containerd[1599]: 2026-01-15 05:57:15.889 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" HandleID="k8s-pod-network.807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Workload="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:15.916 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9hdpt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"28dbae26-ae3c-40cb-b52b-26db1f4b6ea2", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-9hdpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddce50a3304", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:15.917 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] 
ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:15.917 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddce50a3304 ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:15.947 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:15.959 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--9hdpt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"28dbae26-ae3c-40cb-b52b-26db1f4b6ea2", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91", Pod:"coredns-66bc5c9577-9hdpt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddce50a3304", MAC:"4e:3d:14:63:32:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:16.097070 containerd[1599]: 2026-01-15 05:57:16.029 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" Namespace="kube-system" Pod="coredns-66bc5c9577-9hdpt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--9hdpt-eth0" Jan 15 05:57:16.168785 containerd[1599]: time="2026-01-15T05:57:16.164558947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:16.205571 containerd[1599]: time="2026-01-15T05:57:16.205165129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:57:16.210185 containerd[1599]: time="2026-01-15T05:57:16.208904817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:16.210838 kubelet[2864]: E0115 05:57:16.209893 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:16.210838 kubelet[2864]: E0115 05:57:16.209953 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:16.210838 kubelet[2864]: E0115 05:57:16.210048 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:16.210838 kubelet[2864]: E0115 05:57:16.210095 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:16.273000 audit[4873]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:16.273000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7ffd592a5240 a2=0 a3=7ffd592a522c items=0 ppid=4326 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:16.355077 containerd[1599]: time="2026-01-15T05:57:16.354965874Z" level=info msg="connecting to shim 
807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91" address="unix:///run/containerd/s/e8cf0998a1f5ec77fb6fe289688417d19502d849d8a835b923bfadbc57e98380" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:16.377548 systemd-networkd[1494]: cali71ee8645b81: Link UP Jan 15 05:57:16.390223 systemd-networkd[1494]: cali71ee8645b81: Gained carrier Jan 15 05:57:16.391155 systemd[1]: Started cri-containerd-7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b.scope - libcontainer container 7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b. Jan 15 05:57:16.443435 kubelet[2864]: E0115 05:57:16.437885 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:14.857 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--lm2t2-eth0 coredns-66bc5c9577- kube-system f1681ef8-d92a-4410-95a3-be947ed6bc57 924 0 2026-01-15 05:55:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-lm2t2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71ee8645b81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:14.863 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:15.160 [INFO][4760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" HandleID="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Workload="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:15.170 [INFO][4760] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" HandleID="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Workload="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-lm2t2", "timestamp":"2026-01-15 05:57:15.160826574 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:15.170 [INFO][4760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:15.880 [INFO][4760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:15.880 [INFO][4760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.051 [INFO][4760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.112 [INFO][4760] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.196 [INFO][4760] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.241 [INFO][4760] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.268 [INFO][4760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.271 [INFO][4760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.277 [INFO][4760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642 Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.305 [INFO][4760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.337 [INFO][4760] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.337 [INFO][4760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" host="localhost" Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.337 [INFO][4760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:57:16.527182 containerd[1599]: 2026-01-15 05:57:16.337 [INFO][4760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" HandleID="k8s-pod-network.7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Workload="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.356 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lm2t2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f1681ef8-d92a-4410-95a3-be947ed6bc57", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-lm2t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71ee8645b81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.356 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.356 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71ee8645b81 ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.389 
[INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.391 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lm2t2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f1681ef8-d92a-4410-95a3-be947ed6bc57", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642", Pod:"coredns-66bc5c9577-lm2t2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71ee8645b81", MAC:"2e:86:bd:db:18:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:16.535519 containerd[1599]: 2026-01-15 05:57:16.453 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" Namespace="kube-system" Pod="coredns-66bc5c9577-lm2t2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lm2t2-eth0" Jan 15 05:57:16.650207 kubelet[2864]: E0115 05:57:16.649222 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:16.670000 audit: BPF prog-id=219 op=LOAD Jan 15 05:57:16.680000 audit: BPF prog-id=220 op=LOAD Jan 15 05:57:16.680000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.680000 audit: BPF prog-id=220 op=UNLOAD Jan 15 05:57:16.680000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.685000 audit: BPF prog-id=221 op=LOAD Jan 15 05:57:16.685000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.685000 audit: BPF prog-id=222 op=LOAD Jan 15 05:57:16.685000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.685000 audit: BPF prog-id=222 op=UNLOAD Jan 15 05:57:16.685000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.685000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.685000 audit: BPF prog-id=221 op=UNLOAD Jan 15 05:57:16.685000 audit[4872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.685000 audit: BPF prog-id=223 op=LOAD Jan 15 05:57:16.685000 audit[4872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4846 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623736643233303964396438326434393736633937646230313966 Jan 15 05:57:16.699937 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:16.711808 systemd-networkd[1494]: cali70889d3f285: Gained IPv6LL Jan 15 05:57:16.769000 audit[4933]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:16.769000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7f6fa130 a2=0 a3=7ffe7f6fa11c items=0 ppid=3019 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:16.778000 audit[4933]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:16.778000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe7f6fa130 a2=0 a3=0 items=0 ppid=3019 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:16.837835 containerd[1599]: time="2026-01-15T05:57:16.837778229Z" level=info msg="connecting to shim 7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642" address="unix:///run/containerd/s/66259f403e37b8bb996cb5bbc84714d3f68c130c9df124648922d0586562495e" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:16.850161 systemd[1]: Started 
cri-containerd-807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91.scope - libcontainer container 807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91. Jan 15 05:57:16.862825 systemd-networkd[1494]: cali6aab8b65c1e: Link UP Jan 15 05:57:16.868977 systemd-networkd[1494]: cali6aab8b65c1e: Gained carrier Jan 15 05:57:16.942000 audit[4969]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4969 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:16.942000 audit[4969]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffc53b8c6e0 a2=0 a3=7ffc53b8c6cc items=0 ppid=4326 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.942000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:15.929 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0 calico-apiserver-ffcfc74f7- calico-apiserver 759e03fd-9efa-4510-b2ed-62c16a4c2e13 926 0 2026-01-15 05:56:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ffcfc74f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ffcfc74f7-h9kdb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6aab8b65c1e [] [] }} ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:15.930 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.384 [INFO][4857] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" HandleID="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.387 [INFO][4857] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" HandleID="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000370090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ffcfc74f7-h9kdb", "timestamp":"2026-01-15 05:57:16.384562962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.387 [INFO][4857] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.387 [INFO][4857] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.387 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.464 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.599 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.730 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.749 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.766 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.766 [INFO][4857] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.781 [INFO][4857] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.797 [INFO][4857] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.825 [INFO][4857] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.827 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" host="localhost" Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.827 [INFO][4857] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
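
The containerd entries above trace Calico's IPAM walk for the calico-apiserver pod: the host's affinity for block 192.168.88.128/26 is looked up and confirmed, and 192.168.88.135/26 is claimed from that block. As a minimal sketch (plain Python address arithmetic with the values copied from the log, not Calico's own code), one can confirm the claimed address really lies in the affine block and how large such a block is:

```python
# Sanity-check the IPAM result recorded in the log above.
# Block CIDR and claimed IP are copied from the containerd/Calico entries;
# this is ordinary address arithmetic, not the Calico IPAM implementation.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.135")

print(claimed in block)       # True: the claimed IP falls inside the host-affine block
print(block.num_addresses)    # 64 addresses per /26 block
print(block[0], block[-1])    # 192.168.88.128 .. 192.168.88.191
```

The coredns endpoint populated earlier in this log received 192.168.88.134 from the same range, consistent with both workloads drawing addresses from the node's affine /26.
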
Jan 15 05:57:16.964043 containerd[1599]: 2026-01-15 05:57:16.829 [INFO][4857] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" HandleID="k8s-pod-network.0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Workload="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:16.974000 audit: BPF prog-id=224 op=LOAD Jan 15 05:57:16.975000 audit: BPF prog-id=225 op=LOAD Jan 15 05:57:16.975000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.975000 audit: BPF prog-id=225 op=UNLOAD Jan 15 05:57:16.975000 audit[4922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.975000 audit: BPF prog-id=226 op=LOAD Jan 15 05:57:16.975000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.976000 audit: BPF prog-id=227 op=LOAD Jan 15 05:57:16.976000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.976000 audit: BPF prog-id=227 op=UNLOAD Jan 15 05:57:16.976000 audit[4922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
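
The audit PROCTITLE records above store the audited process's command line as a hex string with NUL bytes separating the arguments. A small decoder (plain Python; the sample hex is copied verbatim from one of the iptables-restore audit records earlier in this log, and the longer runc PROCTITLE values decode the same way, though the kernel truncates them so their trailing task IDs are cut short) makes these entries readable:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode(errors="replace")

# Hex copied from an iptables-restore audit record above.
sample = ("69707461626C65732D726573746F7265002D770035"
          "002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# -> iptables-restore -w 5 --noflush --counters
```
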
Jan 15 05:57:16.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.976000 audit: BPF prog-id=226 op=UNLOAD Jan 15 05:57:16.976000 audit[4922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.976000 audit: BPF prog-id=228 op=LOAD Jan 15 05:57:16.976000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4893 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:16.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830376165643137623137333233353865616364646235356335663131 Jan 15 05:57:16.982922 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.840 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0", GenerateName:"calico-apiserver-ffcfc74f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"759e03fd-9efa-4510-b2ed-62c16a4c2e13", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ffcfc74f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ffcfc74f7-h9kdb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aab8b65c1e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.840 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.840 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aab8b65c1e ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.880 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.886 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0", GenerateName:"calico-apiserver-ffcfc74f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"759e03fd-9efa-4510-b2ed-62c16a4c2e13", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ffcfc74f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de", Pod:"calico-apiserver-ffcfc74f7-h9kdb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aab8b65c1e", MAC:"be:98:43:e2:44:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:17.035148 containerd[1599]: 2026-01-15 05:57:16.937 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" Namespace="calico-apiserver" Pod="calico-apiserver-ffcfc74f7-h9kdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--ffcfc74f7--h9kdb-eth0" Jan 15 
05:57:17.119421 containerd[1599]: time="2026-01-15T05:57:17.118540728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-nxntl,Uid:125448ce-e54b-4cc3-923a-6bb87264173b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7cb76d2309d9d82d4976c97db019fa5ef3c7bc9ce454f7e88917c3903111890b\"" Jan 15 05:57:17.135154 containerd[1599]: time="2026-01-15T05:57:17.134929261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:57:17.176025 systemd[1]: Started cri-containerd-7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642.scope - libcontainer container 7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642. Jan 15 05:57:17.178060 containerd[1599]: time="2026-01-15T05:57:17.177917755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hdpt,Uid:28dbae26-ae3c-40cb-b52b-26db1f4b6ea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91\"" Jan 15 05:57:17.179463 containerd[1599]: time="2026-01-15T05:57:17.179056020Z" level=info msg="connecting to shim 0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de" address="unix:///run/containerd/s/bc62bd8ec9e454dd69d4efcfcaf5ab2d1e78ee5f16f85d35c284cd30c0491672" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:17.180109 containerd[1599]: time="2026-01-15T05:57:17.179887199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,}" Jan 15 05:57:17.185911 kubelet[2864]: E0115 05:57:17.185851 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:17.228769 systemd-networkd[1494]: caliddce50a3304: Gained IPv6LL Jan 15 05:57:17.244773 containerd[1599]: time="2026-01-15T05:57:17.244625457Z" level=info msg="CreateContainer within sandbox \"807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:57:17.251475 containerd[1599]: time="2026-01-15T05:57:17.251449171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:17.281000 audit: BPF prog-id=229 op=LOAD Jan 15 05:57:17.286604 containerd[1599]: time="2026-01-15T05:57:17.257465721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:57:17.286000 audit: BPF prog-id=230 op=LOAD Jan 15 05:57:17.286000 audit[4972]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.286000 audit: BPF prog-id=230 op=UNLOAD Jan 15 05:57:17.286000 audit[4972]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.287000 audit: BPF prog-id=231 op=LOAD Jan 15 05:57:17.287000 audit[4972]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.287000 audit: BPF prog-id=232 op=LOAD Jan 15 05:57:17.287000 audit[4972]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.287000 audit: BPF prog-id=232 op=UNLOAD Jan 15 05:57:17.287000 audit[4972]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.287000 audit: BPF prog-id=231 op=UNLOAD Jan 15 05:57:17.287000 audit[4972]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.287000 audit: BPF prog-id=233 op=LOAD Jan 15 05:57:17.287000 audit[4972]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4950 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763636235373731313138366435306234643337323733363233326332 Jan 15 05:57:17.290843 containerd[1599]: time="2026-01-15T05:57:17.259994512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:17.290891 kubelet[2864]: E0115 05:57:17.287132 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:17.290891 kubelet[2864]: E0115 05:57:17.287174 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:17.290891 kubelet[2864]: E0115 05:57:17.287429 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:17.290891 kubelet[2864]: E0115 05:57:17.287563 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:17.309804 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:17.313800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086871054.mount: Deactivated successfully. 
Jan 15 05:57:17.320438 containerd[1599]: time="2026-01-15T05:57:17.319960821Z" level=info msg="Container 6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:57:17.345729 containerd[1599]: time="2026-01-15T05:57:17.345501624Z" level=info msg="CreateContainer within sandbox \"807aed17b1732358eacddb55c5f117609cb7bddea7769e6090eb014650e70b91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd\"" Jan 15 05:57:17.355801 containerd[1599]: time="2026-01-15T05:57:17.355759966Z" level=info msg="StartContainer for \"6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd\"" Jan 15 05:57:17.355000 audit[5030]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=5030 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:17.355000 audit[5030]: SYSCALL arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7ffd8fe81470 a2=0 a3=7ffd8fe8145c items=0 ppid=4326 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.355000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:17.362494 containerd[1599]: time="2026-01-15T05:57:17.362042137Z" level=info msg="connecting to shim 6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd" address="unix:///run/containerd/s/e8cf0998a1f5ec77fb6fe289688417d19502d849d8a835b923bfadbc57e98380" protocol=ttrpc version=3 Jan 15 05:57:17.404921 systemd[1]: Started cri-containerd-0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de.scope - libcontainer container 0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de. Jan 15 05:57:17.472551 systemd[1]: Started cri-containerd-6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd.scope - libcontainer container 6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd. 
Jan 15 05:57:17.492963 containerd[1599]: time="2026-01-15T05:57:17.492576529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lm2t2,Uid:f1681ef8-d92a-4410-95a3-be947ed6bc57,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642\"" Jan 15 05:57:17.496616 kubelet[2864]: E0115 05:57:17.496168 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:17.522088 containerd[1599]: time="2026-01-15T05:57:17.521852493Z" level=info msg="CreateContainer within sandbox \"7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:57:17.538759 kubelet[2864]: E0115 05:57:17.538602 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:17.550000 audit: BPF prog-id=234 op=LOAD Jan 15 05:57:17.551793 kubelet[2864]: E0115 05:57:17.550921 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:17.551793 kubelet[2864]: E0115 05:57:17.551114 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:17.554000 audit: BPF prog-id=235 op=LOAD Jan 15 05:57:17.557000 audit: BPF prog-id=236 op=LOAD Jan 15 05:57:17.559000 audit: BPF prog-id=237 op=LOAD Jan 15 05:57:17.559000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.559000 audit: BPF prog-id=237 op=UNLOAD Jan 15 05:57:17.559000 audit[5035]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.554000 audit[5053]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.560000 audit: BPF prog-id=235 op=UNLOAD Jan 15 05:57:17.560000 audit[5053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.561000 audit: BPF prog-id=238 op=LOAD Jan 15 05:57:17.561000 audit[5053]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.561000 audit: BPF prog-id=239 op=LOAD Jan 15 05:57:17.561000 audit[5053]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000194218 a2=98 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.561000 audit: BPF prog-id=239 op=UNLOAD Jan 15 05:57:17.561000 audit[5053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.561000 audit: BPF prog-id=238 op=UNLOAD Jan 15 05:57:17.561000 audit[5053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.561000 audit: BPF prog-id=240 op=LOAD Jan 15 05:57:17.562000 audit: BPF prog-id=241 op=LOAD Jan 15 05:57:17.562000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.562000 audit: BPF prog-id=242 op=LOAD Jan 15 05:57:17.562000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.562000 audit: BPF prog-id=242 op=UNLOAD Jan 15 05:57:17.562000 audit[5035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.562000 audit: BPF prog-id=241 op=UNLOAD Jan 15 05:57:17.562000 audit[5035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.562000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.562000 audit: BPF prog-id=243 op=LOAD Jan 15 05:57:17.561000 audit[5053]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001946e8 a2=98 a3=0 items=0 ppid=4893 pid=5053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634353933393533383032353266383338333337633435326338613966 Jan 15 05:57:17.562000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5013 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063383966613832633331626665636364323539653365666538326564 Jan 15 05:57:17.581872 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:17.658452 containerd[1599]: time="2026-01-15T05:57:17.656854603Z" level=info msg="Container 6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:57:17.700493 containerd[1599]: time="2026-01-15T05:57:17.700443410Z" level=info msg="CreateContainer within sandbox \"7ccb57711186d50b4d372736232c2757193f07aebf2ed9727f2de97aefa9d642\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600\"" Jan 15 05:57:17.704805 containerd[1599]: time="2026-01-15T05:57:17.704112621Z" level=info msg="StartContainer for \"6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600\"" Jan 15 05:57:17.716989 containerd[1599]: time="2026-01-15T05:57:17.716965699Z" level=info msg="connecting to shim 6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600" address="unix:///run/containerd/s/66259f403e37b8bb996cb5bbc84714d3f68c130c9df124648922d0586562495e" protocol=ttrpc version=3 Jan 15 05:57:17.822740 systemd[1]: Started cri-containerd-6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600.scope - libcontainer container 6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600. 
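
The kubelet "Nameserver limits exceeded" warnings above indicate that the resolver configuration kubelet reads lists more nameservers than can be applied, so only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) are kept. A minimal check is sketched below; the path and the limit of three are the conventional defaults (glibc MAXNS and kubelet's default --resolv-conf), not values read from this particular system:

```python
# Sketch: count nameserver entries roughly the way kubelet's DNS check does.
# /etc/resolv.conf and the limit of 3 are assumed defaults; adjust if kubelet
# is pointed at a different resolver config (e.g. systemd-resolved's file).
RESOLV_CONF = "/etc/resolv.conf"
MAX_NAMESERVERS = 3  # extra entries are ignored and trigger the warning

with open(RESOLV_CONF) as f:
    nameservers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

if len(nameservers) > MAX_NAMESERVERS:
    print(f"{len(nameservers)} nameservers found, only the first {MAX_NAMESERVERS} are applied:",
          nameservers[:MAX_NAMESERVERS])
else:
    print("nameserver count within limits:", nameservers)
```
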
Jan 15 05:57:17.853120 containerd[1599]: time="2026-01-15T05:57:17.852763631Z" level=info msg="StartContainer for \"6459395380252f838337c452c8a9f599bc7f4c0f922368938ee10c2ab1f239dd\" returns successfully" Jan 15 05:57:17.942000 audit: BPF prog-id=244 op=LOAD Jan 15 05:57:17.948000 audit: BPF prog-id=245 op=LOAD Jan 15 05:57:17.948000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=245 op=UNLOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=246 op=LOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168488 a2=98 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=247 op=LOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000168218 a2=98 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=247 op=UNLOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=246 op=UNLOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.949000 audit: BPF prog-id=248 op=LOAD Jan 15 05:57:17.949000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001686e8 a2=98 a3=0 items=0 ppid=4950 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663762373630393037623931653431323030646561376538363033 Jan 15 05:57:17.958000 audit[5143]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5143 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:17.958000 audit[5143]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9a2d7aa0 a2=0 a3=7ffd9a2d7a8c items=0 ppid=3019 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:17.968000 audit[5143]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5143 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:17.968000 audit[5143]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd9a2d7aa0 a2=0 a3=0 items=0 ppid=3019 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:17.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:18.041992 containerd[1599]: time="2026-01-15T05:57:18.041952975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ffcfc74f7-h9kdb,Uid:759e03fd-9efa-4510-b2ed-62c16a4c2e13,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0c89fa82c31bfeccd259e3efe82edfe3f1299c530f7edcba8ecea4f8597449de\"" Jan 15 05:57:18.053771 containerd[1599]: time="2026-01-15T05:57:18.052780300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:18.231783 systemd-networkd[1494]: cali44e67c2a76b: Link UP Jan 15 05:57:18.235884 
systemd-networkd[1494]: cali44e67c2a76b: Gained carrier Jan 15 05:57:18.242604 containerd[1599]: time="2026-01-15T05:57:18.240604558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:18.247838 systemd-networkd[1494]: cali71ee8645b81: Gained IPv6LL Jan 15 05:57:18.258855 containerd[1599]: time="2026-01-15T05:57:18.255150677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:18.258855 containerd[1599]: time="2026-01-15T05:57:18.255481290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:18.259795 kubelet[2864]: E0115 05:57:18.255996 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:18.259795 kubelet[2864]: E0115 05:57:18.256070 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:18.259795 kubelet[2864]: E0115 05:57:18.256173 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:18.259795 kubelet[2864]: E0115 05:57:18.256217 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:18.303129 containerd[1599]: time="2026-01-15T05:57:18.301024262Z" level=info msg="StartContainer for \"6ef7b760907b91e41200dea7e8603fbe6c63da62abc2996a037ef9d94194a600\" returns successfully" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.638 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--glvpn-eth0 csi-node-driver- calico-system 94de96e0-d8e2-4380-a60f-000b8e6b1786 779 0 2026-01-15 05:56:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-glvpn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali44e67c2a76b [] [] }} 
ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.641 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.937 [INFO][5100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" HandleID="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Workload="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.938 [INFO][5100] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" HandleID="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Workload="localhost-k8s-csi--node--driver--glvpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d7450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-glvpn", "timestamp":"2026-01-15 05:57:17.937226793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.938 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.938 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.938 [INFO][5100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.979 [INFO][5100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:17.998 [INFO][5100] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.028 [INFO][5100] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.041 [INFO][5100] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.054 [INFO][5100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.055 [INFO][5100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.070 [INFO][5100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.119 [INFO][5100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.186 [INFO][5100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.187 [INFO][5100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" host="localhost" Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.187 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:57:18.318937 containerd[1599]: 2026-01-15 05:57:18.187 [INFO][5100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" HandleID="k8s-pod-network.6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Workload="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.214 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--glvpn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94de96e0-d8e2-4380-a60f-000b8e6b1786", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-glvpn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e67c2a76b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.220 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.220 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44e67c2a76b ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.243 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.245 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--glvpn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94de96e0-d8e2-4380-a60f-000b8e6b1786", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc", Pod:"csi-node-driver-glvpn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali44e67c2a76b", MAC:"36:f5:34:5d:49:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:57:18.324789 containerd[1599]: 2026-01-15 05:57:18.311 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" Namespace="calico-system" Pod="csi-node-driver-glvpn" WorkloadEndpoint="localhost-k8s-csi--node--driver--glvpn-eth0" Jan 15 05:57:18.375015 systemd-networkd[1494]: cali6aab8b65c1e: Gained IPv6LL Jan 15 05:57:18.503504 containerd[1599]: time="2026-01-15T05:57:18.503003109Z" level=info msg="connecting to shim 6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc" address="unix:///run/containerd/s/21db5f5a716b1348246ee766f7566944945c9c2267b51cb2d8b3b1ab3e27e339" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:57:18.559612 kubelet[2864]: E0115 05:57:18.559042 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:18.592946 kubelet[2864]: E0115 05:57:18.591788 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:18.668144 kubelet[2864]: E0115 05:57:18.666551 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:18.675121 kubelet[2864]: E0115 05:57:18.673594 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:18.719880 kubelet[2864]: I0115 05:57:18.718762 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lm2t2" podStartSLOduration=105.718744042 podStartE2EDuration="1m45.718744042s" podCreationTimestamp="2026-01-15 05:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:57:18.718115057 +0000 UTC m=+109.409031125" watchObservedRunningTime="2026-01-15 05:57:18.718744042 +0000 UTC m=+109.409660100" Jan 15 05:57:18.820000 audit[5182]: NETFILTER_CFG table=filter:135 family=2 entries=52 op=nft_register_chain pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:57:18.820000 audit[5182]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffec3daf6f0 a2=0 a3=7ffec3daf6dc items=0 ppid=4326 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.820000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:57:18.843074 systemd[1]: Started cri-containerd-6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc.scope - libcontainer container 6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc. 
Jan 15 05:57:18.861795 kubelet[2864]: I0115 05:57:18.861040 2864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9hdpt" podStartSLOduration=105.861021683 podStartE2EDuration="1m45.861021683s" podCreationTimestamp="2026-01-15 05:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:57:18.860857691 +0000 UTC m=+109.551773749" watchObservedRunningTime="2026-01-15 05:57:18.861021683 +0000 UTC m=+109.551937742" Jan 15 05:57:18.959000 audit: BPF prog-id=249 op=LOAD Jan 15 05:57:18.962000 audit: BPF prog-id=250 op=LOAD Jan 15 05:57:18.962000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000144238 a2=98 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.962000 audit: BPF prog-id=250 op=UNLOAD Jan 15 05:57:18.962000 audit[5193]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.963000 audit: BPF prog-id=251 op=LOAD Jan 15 05:57:18.963000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000144488 a2=98 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.963000 audit: BPF prog-id=252 op=LOAD Jan 15 05:57:18.963000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000144218 a2=98 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.965000 audit: BPF prog-id=252 op=UNLOAD Jan 15 05:57:18.965000 audit[5193]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.966000 audit: BPF prog-id=251 op=UNLOAD Jan 15 05:57:18.966000 audit[5193]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.966000 audit: BPF prog-id=253 op=LOAD Jan 15 05:57:18.966000 audit[5193]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001446e8 a2=98 a3=0 items=0 ppid=5181 pid=5193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:18.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363437313263613566363739613934643133363736663262376263 Jan 15 05:57:18.983156 systemd-resolved[1289]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:57:19.085444 containerd[1599]: time="2026-01-15T05:57:19.084854504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glvpn,Uid:94de96e0-d8e2-4380-a60f-000b8e6b1786,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b64712ca5f679a94d13676f2b7bc3fa9c0c0fa9c6cefe9dbe17ab17b4f5a6cc\"" Jan 15 05:57:19.096722 containerd[1599]: time="2026-01-15T05:57:19.095903485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:57:19.099000 audit[5220]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:19.099000 audit[5220]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4de832f0 a2=0 a3=7fff4de832dc items=0 ppid=3019 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:19.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:19.141000 audit[5220]: NETFILTER_CFG table=nat:137 family=2 entries=47 op=nft_register_chain pid=5220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:57:19.141000 audit[5220]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff4de832f0 a2=0 a3=7fff4de832dc items=0 ppid=3019 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:19.141000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:57:19.189146 containerd[1599]: time="2026-01-15T05:57:19.189092879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:19.195858 containerd[1599]: time="2026-01-15T05:57:19.195591667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:57:19.195858 containerd[1599]: time="2026-01-15T05:57:19.195826420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:19.205901 kubelet[2864]: E0115 05:57:19.205568 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:19.206205 kubelet[2864]: E0115 05:57:19.206013 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:19.206205 kubelet[2864]: E0115 05:57:19.206127 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:19.213770 containerd[1599]: time="2026-01-15T05:57:19.211201698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:57:19.387897 containerd[1599]: time="2026-01-15T05:57:19.387068123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:19.394814 containerd[1599]: time="2026-01-15T05:57:19.393854945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:57:19.394814 containerd[1599]: time="2026-01-15T05:57:19.393962563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:19.394942 kubelet[2864]: E0115 05:57:19.394538 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:19.394942 kubelet[2864]: E0115 05:57:19.394593 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:19.395893 kubelet[2864]: E0115 05:57:19.394992 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:19.395893 kubelet[2864]: E0115 05:57:19.395031 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:19.681802 kubelet[2864]: E0115 05:57:19.680178 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:19.681802 kubelet[2864]: E0115 05:57:19.681524 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:19.687464 kubelet[2864]: E0115 05:57:19.687434 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:19.692720 kubelet[2864]: E0115 05:57:19.691521 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:20.103186 systemd-networkd[1494]: 
cali44e67c2a76b: Gained IPv6LL Jan 15 05:57:20.688920 kubelet[2864]: E0115 05:57:20.686760 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:20.688920 kubelet[2864]: E0115 05:57:20.688694 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:20.698217 kubelet[2864]: E0115 05:57:20.698102 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:21.696380 update_engine[1589]: I20260115 05:57:21.691969 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 05:57:21.696989 update_engine[1589]: I20260115 05:57:21.696457 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 05:57:21.697910 update_engine[1589]: I20260115 05:57:21.697757 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 15 05:57:21.714532 update_engine[1589]: E20260115 05:57:21.714154 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 05:57:21.714532 update_engine[1589]: I20260115 05:57:21.714489 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 15 05:57:23.167919 containerd[1599]: time="2026-01-15T05:57:23.167109935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:57:23.241173 containerd[1599]: time="2026-01-15T05:57:23.240903956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:23.245059 containerd[1599]: time="2026-01-15T05:57:23.244824818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:57:23.245059 containerd[1599]: time="2026-01-15T05:57:23.244912548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:23.245484 kubelet[2864]: E0115 05:57:23.245101 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:23.245484 kubelet[2864]: E0115 05:57:23.245158 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:23.250050 kubelet[2864]: E0115 05:57:23.248664 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:23.258134 containerd[1599]: time="2026-01-15T05:57:23.257218994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:57:23.337228 containerd[1599]: time="2026-01-15T05:57:23.337034806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:23.341686 containerd[1599]: time="2026-01-15T05:57:23.340674179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:57:23.341686 containerd[1599]: time="2026-01-15T05:57:23.340799440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:23.342188 kubelet[2864]: E0115 05:57:23.342142 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:23.342833 kubelet[2864]: E0115 05:57:23.342666 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:23.344064 kubelet[2864]: E0115 05:57:23.343920 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:23.344064 kubelet[2864]: E0115 05:57:23.343967 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:29.165194 containerd[1599]: time="2026-01-15T05:57:29.165130900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:57:29.277152 containerd[1599]: time="2026-01-15T05:57:29.277047272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:29.291935 containerd[1599]: time="2026-01-15T05:57:29.289672059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:57:29.291935 containerd[1599]: time="2026-01-15T05:57:29.289779118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:29.292157 kubelet[2864]: E0115 05:57:29.289902 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:29.292157 kubelet[2864]: E0115 05:57:29.289937 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:29.292157 kubelet[2864]: E0115 05:57:29.290213 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:29.292157 kubelet[2864]: E0115 05:57:29.290591 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:29.296825 containerd[1599]: time="2026-01-15T05:57:29.294939752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:57:29.380936 containerd[1599]: time="2026-01-15T05:57:29.380746273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:29.384831 containerd[1599]: time="2026-01-15T05:57:29.384785019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:57:29.385060 containerd[1599]: time="2026-01-15T05:57:29.384941297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:29.386587 kubelet[2864]: E0115 05:57:29.385918 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:29.386587 kubelet[2864]: E0115 05:57:29.385968 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:29.386587 kubelet[2864]: E0115 05:57:29.386091 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:29.386587 kubelet[2864]: E0115 05:57:29.386137 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:31.181151 containerd[1599]: time="2026-01-15T05:57:31.175695567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:31.259579 containerd[1599]: 
time="2026-01-15T05:57:31.259189952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:31.266956 containerd[1599]: time="2026-01-15T05:57:31.265229001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:31.266956 containerd[1599]: time="2026-01-15T05:57:31.266833920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:31.270408 kubelet[2864]: E0115 05:57:31.268396 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:31.270408 kubelet[2864]: E0115 05:57:31.268574 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:31.270408 kubelet[2864]: E0115 05:57:31.269110 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:31.270408 kubelet[2864]: E0115 05:57:31.269149 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:31.717816 update_engine[1589]: I20260115 05:57:31.715916 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 05:57:31.717816 update_engine[1589]: I20260115 05:57:31.716216 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 05:57:31.719805 update_engine[1589]: I20260115 05:57:31.719775 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 05:57:31.733747 update_engine[1589]: E20260115 05:57:31.733394 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 05:57:31.733747 update_engine[1589]: I20260115 05:57:31.733675 1589 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 15 05:57:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.115:22-10.0.0.1:56978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:31.990664 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:56978.service - OpenSSH per-connection server daemon (10.0.0.1:56978). 
Jan 15 05:57:32.032664 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 15 05:57:32.032819 kernel: audit: type=1130 audit(1768456651.989:740): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.115:22-10.0.0.1:56978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:32.171104 containerd[1599]: time="2026-01-15T05:57:32.170842380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:57:32.256724 containerd[1599]: time="2026-01-15T05:57:32.256604364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:32.266829 containerd[1599]: time="2026-01-15T05:57:32.265544244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:32.266933 containerd[1599]: time="2026-01-15T05:57:32.266829973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:57:32.270225 kubelet[2864]: E0115 05:57:32.269958 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:32.270225 kubelet[2864]: E0115 05:57:32.270060 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:32.270225 kubelet[2864]: E0115 05:57:32.270127 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:32.278772 containerd[1599]: time="2026-01-15T05:57:32.277938415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:57:32.278998 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 56978 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:57:32.277000 audit[5241]: USER_ACCT pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.289101 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:57:32.324921 kernel: audit: type=1101 audit(1768456652.277:741): pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.283000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.327514 systemd-logind[1584]: New session 9 of user core. Jan 15 05:57:32.356855 containerd[1599]: time="2026-01-15T05:57:32.356150063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:32.359204 containerd[1599]: time="2026-01-15T05:57:32.359056795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:57:32.359578 containerd[1599]: time="2026-01-15T05:57:32.359222109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:32.359986 kubelet[2864]: E0115 05:57:32.359826 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:32.359986 kubelet[2864]: E0115 05:57:32.359971 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:32.360081 kubelet[2864]: E0115 05:57:32.360040 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:32.360109 kubelet[2864]: E0115 05:57:32.360076 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:32.368905 kernel: audit: type=1103 audit(1768456652.283:742): pid=5241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.369071 kernel: audit: type=1006 audit(1768456652.283:743): pid=5241 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 05:57:32.283000 audit[5241]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b1494d0 a2=3 a3=0 items=0 ppid=1 pid=5241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:32.443895 kernel: audit: type=1300 audit(1768456652.283:743): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7b1494d0 a2=3 a3=0 items=0 ppid=1 pid=5241 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:32.283000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:32.446221 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 05:57:32.463796 kernel: audit: type=1327 audit(1768456652.283:743): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:32.462000 audit[5241]: USER_START pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.463000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.567778 kernel: audit: type=1105 audit(1768456652.462:744): pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:32.567866 kernel: audit: type=1103 audit(1768456652.463:745): pid=5251 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:33.028148 sshd[5251]: Connection closed by 10.0.0.1 port 56978 Jan 15 05:57:33.031805 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Jan 15 05:57:33.043000 audit[5241]: USER_END pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:33.051791 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:56978.service: Deactivated successfully. Jan 15 05:57:33.059018 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 05:57:33.063096 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Jan 15 05:57:33.071026 systemd-logind[1584]: Removed session 9. 
Jan 15 05:57:33.102846 kernel: audit: type=1106 audit(1768456653.043:746): pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:33.103803 kernel: audit: type=1104 audit(1768456653.044:747): pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:33.044000 audit[5241]: CRED_DISP pid=5241 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:33.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.115:22-10.0.0.1:56978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:33.169705 containerd[1599]: time="2026-01-15T05:57:33.168952528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:33.258864 containerd[1599]: time="2026-01-15T05:57:33.257898087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:33.263182 containerd[1599]: time="2026-01-15T05:57:33.262764975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:33.263182 containerd[1599]: time="2026-01-15T05:57:33.262839743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:33.264080 kubelet[2864]: E0115 05:57:33.263002 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:33.264080 kubelet[2864]: E0115 05:57:33.263051 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:33.264080 kubelet[2864]: E0115 05:57:33.263115 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:33.264080 kubelet[2864]: E0115 05:57:33.263142 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:37.086804 kubelet[2864]: E0115 05:57:37.084964 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:38.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.115:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:38.064841 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:56298.service - OpenSSH per-connection server daemon (10.0.0.1:56298). Jan 15 05:57:38.078999 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:57:38.079892 kernel: audit: type=1130 audit(1768456658.063:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.115:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:38.232836 kubelet[2864]: E0115 05:57:38.232035 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:38.352000 audit[5295]: USER_ACCT pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.361735 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 56298 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:57:38.376771 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:57:38.411161 kernel: audit: type=1101 audit(1768456658.352:750): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.431164 kernel: audit: type=1103 audit(1768456658.367:751): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.367000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.487806 systemd-logind[1584]: New session 10 of user core. Jan 15 05:57:38.500578 kernel: audit: type=1006 audit(1768456658.367:752): pid=5295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 05:57:38.367000 audit[5295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6f8598d0 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:38.506824 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 05:57:38.573761 kernel: audit: type=1300 audit(1768456658.367:752): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6f8598d0 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:38.367000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:38.604726 kernel: audit: type=1327 audit(1768456658.367:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:38.604870 kernel: audit: type=1105 audit(1768456658.553:753): pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.553000 audit[5295]: USER_START pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.733175 kernel: audit: type=1103 audit(1768456658.568:754): pid=5299 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:38.568000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:39.227027 sshd[5299]: Connection closed by 10.0.0.1 port 56298 Jan 15 05:57:39.227003 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Jan 15 05:57:39.229000 audit[5295]: USER_END pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:39.243695 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:56298.service: Deactivated successfully. Jan 15 05:57:39.253964 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 05:57:39.263186 systemd-logind[1584]: Session 10 logged out. 
Waiting for processes to exit. Jan 15 05:57:39.269827 systemd-logind[1584]: Removed session 10. Jan 15 05:57:39.292697 kernel: audit: type=1106 audit(1768456659.229:755): pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:39.292824 kernel: audit: type=1104 audit(1768456659.230:756): pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:39.230000 audit[5295]: CRED_DISP pid=5295 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:39.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.115:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:41.690972 update_engine[1589]: I20260115 05:57:41.690701 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 05:57:41.690972 update_engine[1589]: I20260115 05:57:41.690806 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 05:57:41.698182 update_engine[1589]: I20260115 05:57:41.698123 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 05:57:41.725580 update_engine[1589]: E20260115 05:57:41.724637 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 05:57:41.725580 update_engine[1589]: I20260115 05:57:41.724768 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 05:57:41.725580 update_engine[1589]: I20260115 05:57:41.724787 1589 omaha_request_action.cc:617] Omaha request response: Jan 15 05:57:41.725580 update_engine[1589]: E20260115 05:57:41.724905 1589 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 15 05:57:41.725874 update_engine[1589]: I20260115 05:57:41.725845 1589 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 15 05:57:41.725952 update_engine[1589]: I20260115 05:57:41.725930 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.726012 1589 update_attempter.cc:306] Processing Done. Jan 15 05:57:41.729030 update_engine[1589]: E20260115 05:57:41.726042 1589 update_attempter.cc:619] Update failed. Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.726183 1589 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.726203 1589 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.726214 1589 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.727025 1589 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.727065 1589 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.727077 1589 omaha_request_action.cc:272] Request: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.727087 1589 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 05:57:41.729030 update_engine[1589]: I20260115 05:57:41.727118 1589 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 05:57:41.730009 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 15 05:57:41.731224 update_engine[1589]: I20260115 05:57:41.731184 1589 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 05:57:41.746061 update_engine[1589]: E20260115 05:57:41.745808 1589 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 15 05:57:41.746061 update_engine[1589]: I20260115 05:57:41.746018 1589 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 05:57:41.746061 update_engine[1589]: I20260115 05:57:41.746035 1589 omaha_request_action.cc:617] Omaha request response: Jan 15 05:57:41.746061 update_engine[1589]: I20260115 05:57:41.746047 1589 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 05:57:41.746061 update_engine[1589]: I20260115 05:57:41.746057 1589 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 05:57:41.746640 update_engine[1589]: I20260115 05:57:41.746065 1589 update_attempter.cc:306] Processing Done. Jan 15 05:57:41.746640 update_engine[1589]: I20260115 05:57:41.746075 1589 update_attempter.cc:310] Error event sent. 
Jan 15 05:57:41.746640 update_engine[1589]: I20260115 05:57:41.746088 1589 update_check_scheduler.cc:74] Next update check in 44m46s Jan 15 05:57:41.747770 locksmithd[1649]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 15 05:57:42.758164 kubelet[2864]: E0115 05:57:42.758109 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:42.780756 kubelet[2864]: E0115 05:57:42.780132 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:44.173633 kubelet[2864]: E0115 05:57:44.170003 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:44.178950 kubelet[2864]: E0115 05:57:44.178558 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:57:44.191023 kubelet[2864]: E0115 05:57:44.190928 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:44.359758 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:57:44.360599 kernel: audit: type=1130 audit(1768456664.289:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.115:22-10.0.0.1:56310 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 15 05:57:44.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.115:22-10.0.0.1:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:44.290174 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:56310.service - OpenSSH per-connection server daemon (10.0.0.1:56310). Jan 15 05:57:44.611734 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:57:44.609000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.631957 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:57:44.662736 systemd-logind[1584]: New session 11 of user core. Jan 15 05:57:44.665684 kernel: audit: type=1101 audit(1768456664.609:759): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.665772 kernel: audit: type=1103 audit(1768456664.616:760): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.616000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.740819 kernel: audit: type=1006 audit(1768456664.616:761): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 15 05:57:44.742674 kernel: audit: type=1300 audit(1768456664.616:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc78f40a30 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:44.616000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc78f40a30 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:44.788997 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 15 05:57:44.616000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:44.815000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.890588 kernel: audit: type=1327 audit(1768456664.616:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:44.890698 kernel: audit: type=1105 audit(1768456664.815:762): pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.831000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:44.944657 kernel: audit: type=1103 audit(1768456664.831:763): pid=5323 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:45.607694 sshd[5323]: Connection closed by 10.0.0.1 port 56310 Jan 15 05:57:45.609999 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Jan 15 05:57:45.614000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:45.621792 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:56310.service: Deactivated successfully. Jan 15 05:57:45.653982 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 05:57:45.660931 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Jan 15 05:57:45.665026 systemd-logind[1584]: Removed session 11. 
Jan 15 05:57:45.695960 kernel: audit: type=1106 audit(1768456665.614:764): pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:45.614000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:45.772050 kernel: audit: type=1104 audit(1768456665.614:765): pid=5316 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:45.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.115:22-10.0.0.1:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:46.177724 kubelet[2864]: E0115 05:57:46.176958 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:46.182563 kubelet[2864]: E0115 05:57:46.181853 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:50.172511 containerd[1599]: time="2026-01-15T05:57:50.171689286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:57:50.247626 containerd[1599]: time="2026-01-15T05:57:50.246924232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:50.255620 containerd[1599]: time="2026-01-15T05:57:50.254864963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:57:50.255620 containerd[1599]: time="2026-01-15T05:57:50.255488587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:50.256645 kubelet[2864]: E0115 05:57:50.256093 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:50.256645 kubelet[2864]: E0115 05:57:50.256623 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:57:50.257163 kubelet[2864]: E0115 05:57:50.256848 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:50.264686 containerd[1599]: time="2026-01-15T05:57:50.263543194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:57:50.340079 containerd[1599]: time="2026-01-15T05:57:50.339695192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:50.345673 containerd[1599]: time="2026-01-15T05:57:50.344773147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:57:50.347803 containerd[1599]: time="2026-01-15T05:57:50.345619311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:50.347872 kubelet[2864]: E0115 05:57:50.347085 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:50.347872 kubelet[2864]: E0115 05:57:50.347140 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:57:50.352787 kubelet[2864]: E0115 05:57:50.351632 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:50.352787 kubelet[2864]: E0115 05:57:50.351821 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:57:50.643699 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:54986.service - OpenSSH per-connection server daemon (10.0.0.1:54986). 
Jan 15 05:57:50.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.115:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:50.657855 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:57:50.657934 kernel: audit: type=1130 audit(1768456670.642:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.115:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:50.887181 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 54986 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:57:50.885000 audit[5339]: USER_ACCT pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:50.892182 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:57:50.913539 systemd-logind[1584]: New session 12 of user core. Jan 15 05:57:50.931604 kernel: audit: type=1101 audit(1768456670.885:768): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:50.889000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.006802 kernel: audit: type=1103 audit(1768456670.889:769): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.006946 kernel: audit: type=1006 audit(1768456670.889:770): pid=5339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 15 05:57:51.006987 kernel: audit: type=1300 audit(1768456670.889:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3dc7fd10 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:50.889000 audit[5339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3dc7fd10 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:50.889000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:51.059958 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 15 05:57:51.077071 kernel: audit: type=1327 audit(1768456670.889:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:51.079509 kernel: audit: type=1105 audit(1768456671.067:771): pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.067000 audit[5339]: USER_START pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.073000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.177677 kernel: audit: type=1103 audit(1768456671.073:772): pid=5343 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.184064 kubelet[2864]: E0115 05:57:51.183684 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:51.460921 sshd[5343]: Connection closed by 10.0.0.1 port 54986 Jan 15 05:57:51.461809 sshd-session[5339]: pam_unix(sshd:session): session closed for user core Jan 15 05:57:51.467000 audit[5339]: USER_END pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.476796 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:54986.service: Deactivated successfully. Jan 15 05:57:51.487798 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 05:57:51.493845 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Jan 15 05:57:51.504608 systemd-logind[1584]: Removed session 12. 
Jan 15 05:57:51.533606 kernel: audit: type=1106 audit(1768456671.467:773): pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.468000 audit[5339]: CRED_DISP pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.575851 kernel: audit: type=1104 audit(1768456671.468:774): pid=5339 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:51.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.115:22-10.0.0.1:54986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:55.171685 containerd[1599]: time="2026-01-15T05:57:55.170076547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:57:55.251755 containerd[1599]: time="2026-01-15T05:57:55.251533039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:55.255832 containerd[1599]: time="2026-01-15T05:57:55.255755880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:57:55.255935 containerd[1599]: time="2026-01-15T05:57:55.255866504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:55.256566 kubelet[2864]: E0115 05:57:55.256051 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:55.256566 kubelet[2864]: E0115 05:57:55.256101 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:57:55.257050 kubelet[2864]: E0115 05:57:55.256589 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:55.257050 kubelet[2864]: E0115 05:57:55.256639 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:57:56.164537 kubelet[2864]: E0115 05:57:56.164024 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:56.166076 containerd[1599]: time="2026-01-15T05:57:56.165564042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:57:56.274885 containerd[1599]: time="2026-01-15T05:57:56.274653619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:56.278066 containerd[1599]: time="2026-01-15T05:57:56.278022232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:57:56.278118 containerd[1599]: time="2026-01-15T05:57:56.278102922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:56.279453 kubelet[2864]: E0115 05:57:56.278746 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:56.279453 kubelet[2864]: E0115 05:57:56.278913 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:57:56.279453 kubelet[2864]: E0115 05:57:56.278982 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:56.279453 kubelet[2864]: E0115 05:57:56.279014 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:57:56.493858 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472). Jan 15 05:57:56.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.115:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:57:56.505818 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:57:56.505869 kernel: audit: type=1130 audit(1768456676.492:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.115:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:56.693000 audit[5364]: USER_ACCT pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.696134 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:57:56.703806 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:57:56.726128 systemd-logind[1584]: New session 13 of user core. Jan 15 05:57:56.697000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.791487 kernel: audit: type=1101 audit(1768456676.693:777): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.791587 kernel: audit: type=1103 audit(1768456676.697:778): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.791642 kernel: audit: type=1006 audit(1768456676.699:779): pid=5364 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 15 05:57:56.699000 audit[5364]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9b369fd0 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:56.699000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:56.886827 kernel: audit: type=1300 audit(1768456676.699:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9b369fd0 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:57:56.886905 kernel: audit: type=1327 audit(1768456676.699:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:57:56.889578 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 15 05:57:56.899000 audit[5364]: USER_START pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.900000 audit[5368]: CRED_ACQ pid=5368 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.991479 kernel: audit: type=1105 audit(1768456676.899:780): pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:56.991568 kernel: audit: type=1103 audit(1768456676.900:781): pid=5368 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:57.175105 containerd[1599]: time="2026-01-15T05:57:57.173927778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:57.223751 sshd[5368]: Connection closed by 10.0.0.1 port 33472 Jan 15 05:57:57.225497 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Jan 15 05:57:57.230000 audit[5364]: USER_END pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:57.236074 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:33472.service: Deactivated successfully. Jan 15 05:57:57.236876 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Jan 15 05:57:57.243057 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 05:57:57.253655 systemd-logind[1584]: Removed session 13. 
Jan 15 05:57:57.230000 audit[5364]: CRED_DISP pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:57.291885 containerd[1599]: time="2026-01-15T05:57:57.291614504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:57.294929 containerd[1599]: time="2026-01-15T05:57:57.294825074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:57.294929 containerd[1599]: time="2026-01-15T05:57:57.294895825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:57.302625 kubelet[2864]: E0115 05:57:57.302575 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:57.302625 kubelet[2864]: E0115 05:57:57.302618 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:57.314653 kubelet[2864]: E0115 05:57:57.307798 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:57.314653 kubelet[2864]: E0115 05:57:57.307851 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:57:57.314868 containerd[1599]: time="2026-01-15T05:57:57.314844873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:57:57.320546 kernel: audit: type=1106 audit(1768456677.230:782): pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:57.320681 kernel: audit: type=1104 audit(1768456677.230:783): pid=5364 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:57:57.235000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.115:22-10.0.0.1:33472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:57:57.388077 containerd[1599]: time="2026-01-15T05:57:57.387744023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:57.391854 containerd[1599]: time="2026-01-15T05:57:57.391595910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:57:57.391854 containerd[1599]: time="2026-01-15T05:57:57.391704211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:57.396030 kubelet[2864]: E0115 05:57:57.395833 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:57.396030 kubelet[2864]: E0115 05:57:57.395876 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:57:57.396030 kubelet[2864]: E0115 05:57:57.395942 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:57.408927 containerd[1599]: time="2026-01-15T05:57:57.408886356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:57:57.496828 containerd[1599]: time="2026-01-15T05:57:57.495931018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:57.502599 containerd[1599]: time="2026-01-15T05:57:57.501813220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:57:57.502599 containerd[1599]: time="2026-01-15T05:57:57.501909618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:57.504685 kubelet[2864]: E0115 05:57:57.503763 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:57.504685 kubelet[2864]: E0115 05:57:57.503916 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:57:57.504685 kubelet[2864]: E0115 05:57:57.503993 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:57.504685 kubelet[2864]: E0115 05:57:57.504033 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:57:58.171015 containerd[1599]: time="2026-01-15T05:57:58.169783086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:57:58.177102 kubelet[2864]: E0115 05:57:58.174030 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:57:58.264036 containerd[1599]: time="2026-01-15T05:57:58.263776274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:57:58.268557 containerd[1599]: time="2026-01-15T05:57:58.266964284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:57:58.268557 containerd[1599]: time="2026-01-15T05:57:58.267543696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:57:58.268946 kubelet[2864]: E0115 05:57:58.267921 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:58.268946 kubelet[2864]: E0115 05:57:58.267970 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:57:58.268946 kubelet[2864]: E0115 05:57:58.268051 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:57:58.268946 kubelet[2864]: E0115 05:57:58.268092 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:58:01.176538 kubelet[2864]: E0115 05:58:01.172927 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:58:02.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.115:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:02.242940 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:33486.service - OpenSSH per-connection server daemon (10.0.0.1:33486). Jan 15 05:58:02.252500 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:02.252557 kernel: audit: type=1130 audit(1768456682.241:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.115:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:02.401000 audit[5384]: USER_ACCT pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.403570 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:02.408938 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:02.426069 systemd-logind[1584]: New session 14 of user core. 
Jan 15 05:58:02.406000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.488589 kernel: audit: type=1101 audit(1768456682.401:786): pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.488670 kernel: audit: type=1103 audit(1768456682.406:787): pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.514581 kernel: audit: type=1006 audit(1768456682.407:788): pid=5384 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 15 05:58:02.407000 audit[5384]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe27bb7450 a2=3 a3=0 items=0 ppid=1 pid=5384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:02.516039 kernel: audit: type=1300 audit(1768456682.407:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe27bb7450 a2=3 a3=0 items=0 ppid=1 pid=5384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:02.559630 kernel: audit: type=1327 audit(1768456682.407:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:02.407000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:02.558659 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 15 05:58:02.579000 audit[5384]: USER_START pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.636797 kernel: audit: type=1105 audit(1768456682.579:789): pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.636933 kernel: audit: type=1103 audit(1768456682.587:790): pid=5388 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.587000 audit[5388]: CRED_ACQ pid=5388 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.960440 sshd[5388]: Connection closed by 10.0.0.1 port 33486 Jan 15 05:58:02.960678 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:02.966000 audit[5384]: USER_END pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.972070 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:33486.service: Deactivated successfully. Jan 15 05:58:02.978610 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 05:58:02.984613 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Jan 15 05:58:02.988223 systemd-logind[1584]: Removed session 14. Jan 15 05:58:02.966000 audit[5384]: CRED_DISP pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:03.054943 kernel: audit: type=1106 audit(1768456682.966:791): pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:03.055002 kernel: audit: type=1104 audit(1768456682.966:792): pid=5384 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:02.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.115:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:06.170494 kubelet[2864]: E0115 05:58:06.169970 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:58:07.178593 kubelet[2864]: E0115 05:58:07.167226 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:58:07.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.115:22-10.0.0.1:51746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:07.982032 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:51746.service - OpenSSH per-connection server daemon (10.0.0.1:51746). Jan 15 05:58:07.994511 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:07.994615 kernel: audit: type=1130 audit(1768456687.982:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.115:22-10.0.0.1:51746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:08.180214 kubelet[2864]: E0115 05:58:08.179729 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:58:08.258000 audit[5432]: USER_ACCT pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.261496 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 51746 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:08.277840 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:08.300485 systemd-logind[1584]: New session 15 of user core. 
Jan 15 05:58:08.273000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.362628 kernel: audit: type=1101 audit(1768456688.258:795): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.363633 kernel: audit: type=1103 audit(1768456688.273:796): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.364746 kernel: audit: type=1006 audit(1768456688.273:797): pid=5432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 05:58:08.273000 audit[5432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad981850 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:08.449695 kernel: audit: type=1300 audit(1768456688.273:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad981850 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:08.449875 kernel: audit: type=1327 audit(1768456688.273:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:08.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:08.471912 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 15 05:58:08.481000 audit[5432]: USER_START pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.540463 kernel: audit: type=1105 audit(1768456688.481:798): pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.540669 kernel: audit: type=1103 audit(1768456688.487:799): pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.487000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.902961 sshd[5436]: Connection closed by 10.0.0.1 port 51746 Jan 15 05:58:08.903643 sshd-session[5432]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:08.911000 audit[5432]: USER_END pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.920686 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:51746.service: Deactivated successfully. Jan 15 05:58:08.932758 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 05:58:08.935733 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Jan 15 05:58:08.939759 systemd-logind[1584]: Removed session 15. Jan 15 05:58:08.971556 kernel: audit: type=1106 audit(1768456688.911:800): pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.971629 kernel: audit: type=1104 audit(1768456688.912:801): pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.912000 audit[5432]: CRED_DISP pid=5432 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:08.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.115:22-10.0.0.1:51746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:10.181897 kubelet[2864]: E0115 05:58:10.180908 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:58:12.168886 kubelet[2864]: E0115 05:58:12.168822 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:58:13.175466 kubelet[2864]: E0115 05:58:13.174966 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:58:13.939006 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:51752.service - OpenSSH per-connection server daemon (10.0.0.1:51752). Jan 15 05:58:13.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.115:22-10.0.0.1:51752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:13.950783 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:13.951194 kernel: audit: type=1130 audit(1768456693.939:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.115:22-10.0.0.1:51752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:14.147000 audit[5452]: USER_ACCT pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.185493 systemd-logind[1584]: New session 16 of user core. Jan 15 05:58:14.189763 kernel: audit: type=1101 audit(1768456694.147:804): pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.152636 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:14.150000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.190752 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 51752 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:14.257180 kernel: audit: type=1103 audit(1768456694.150:805): pid=5452 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.259505 kernel: audit: type=1006 audit(1768456694.150:806): pid=5452 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 15 05:58:14.259548 kernel: audit: type=1300 audit(1768456694.150:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2b735f20 a2=3 a3=0 items=0 ppid=1 pid=5452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:14.150000 audit[5452]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2b735f20 a2=3 a3=0 items=0 ppid=1 pid=5452 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:14.258563 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 15 05:58:14.150000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:14.344447 kernel: audit: type=1327 audit(1768456694.150:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:14.282000 audit[5452]: USER_START pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.415490 kernel: audit: type=1105 audit(1768456694.282:807): pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.317000 audit[5462]: CRED_ACQ pid=5462 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.460454 kernel: audit: type=1103 audit(1768456694.317:808): pid=5462 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.742028 sshd[5462]: Connection closed by 10.0.0.1 port 51752 Jan 15 05:58:14.743659 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:14.750000 audit[5452]: USER_END pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.757847 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:51752.service: Deactivated successfully. Jan 15 05:58:14.766018 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 05:58:14.775489 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Jan 15 05:58:14.780642 systemd-logind[1584]: Removed session 16. 
Jan 15 05:58:14.751000 audit[5452]: CRED_DISP pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.837198 kernel: audit: type=1106 audit(1768456694.750:809): pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.837585 kernel: audit: type=1104 audit(1768456694.751:810): pid=5452 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:14.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.115:22-10.0.0.1:51752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:19.173488 kubelet[2864]: E0115 05:58:19.172585 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:58:19.763625 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:59148.service - OpenSSH per-connection server daemon (10.0.0.1:59148). Jan 15 05:58:19.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.115:22-10.0.0.1:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:19.772664 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:19.772752 kernel: audit: type=1130 audit(1768456699.763:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.115:22-10.0.0.1:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:19.869000 audit[5479]: USER_ACCT pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.870573 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 59148 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:19.872957 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:19.884146 systemd-logind[1584]: New session 17 of user core. 
Jan 15 05:58:19.888415 kernel: audit: type=1101 audit(1768456699.869:813): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.871000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.920967 kernel: audit: type=1103 audit(1768456699.871:814): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.921084 kernel: audit: type=1006 audit(1768456699.871:815): pid=5479 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 15 05:58:19.921129 kernel: audit: type=1300 audit(1768456699.871:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff90b274e0 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:19.871000 audit[5479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff90b274e0 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:19.871000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:19.937571 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 05:58:19.942398 kernel: audit: type=1327 audit(1768456699.871:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:19.942456 kernel: audit: type=1105 audit(1768456699.941:816): pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.941000 audit[5479]: USER_START pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.944000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:19.978794 kernel: audit: type=1103 audit(1768456699.944:817): pid=5483 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.137965 sshd[5483]: Connection closed by 10.0.0.1 port 59148 Jan 15 05:58:20.140558 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:20.143000 audit[5479]: USER_END pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.151504 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:59148.service: Deactivated successfully. Jan 15 05:58:20.155645 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 05:58:20.157751 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Jan 15 05:58:20.163420 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:59160.service - OpenSSH per-connection server daemon (10.0.0.1:59160). Jan 15 05:58:20.166914 systemd-logind[1584]: Removed session 17. 
Jan 15 05:58:20.143000 audit[5479]: CRED_DISP pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.191203 kernel: audit: type=1106 audit(1768456700.143:818): pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.191389 kernel: audit: type=1104 audit(1768456700.143:819): pid=5479 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.115:22-10.0.0.1:59148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:20.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.115:22-10.0.0.1:59160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:20.252717 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 59160 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:20.251000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.254000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.254000 audit[5498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc87f09680 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:20.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:20.256627 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:20.268591 systemd-logind[1584]: New session 18 of user core. Jan 15 05:58:20.283675 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 15 05:58:20.288000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.291000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.466860 sshd[5503]: Connection closed by 10.0.0.1 port 59160 Jan 15 05:58:20.468103 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:20.473000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.474000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.487172 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:59160.service: Deactivated successfully. Jan 15 05:58:20.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.115:22-10.0.0.1:59160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:20.495112 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 05:58:20.497353 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit. Jan 15 05:58:20.502347 systemd-logind[1584]: Removed session 18. Jan 15 05:58:20.505552 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:59168.service - OpenSSH per-connection server daemon (10.0.0.1:59168). Jan 15 05:58:20.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.115:22-10.0.0.1:59168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:20.600000 audit[5515]: USER_ACCT pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.600979 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 59168 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:20.601000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.602000 audit[5515]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff42b26670 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:20.602000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:20.603966 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:20.617927 systemd-logind[1584]: New session 19 of user core. Jan 15 05:58:20.628626 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 05:58:20.637000 audit[5515]: USER_START pid=5515 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.641000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.824510 sshd[5519]: Connection closed by 10.0.0.1 port 59168 Jan 15 05:58:20.825707 sshd-session[5515]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:20.828000 audit[5515]: USER_END pid=5515 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.828000 audit[5515]: CRED_DISP pid=5515 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:20.833764 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:59168.service: Deactivated successfully. Jan 15 05:58:20.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.115:22-10.0.0.1:59168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:20.839005 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 05:58:20.842915 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Jan 15 05:58:20.846429 systemd-logind[1584]: Removed session 19. 
Jan 15 05:58:21.166442 kubelet[2864]: E0115 05:58:21.165425 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:58:21.167427 kubelet[2864]: E0115 05:58:21.166867 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:58:24.171399 kubelet[2864]: E0115 05:58:24.170771 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:58:25.166930 kubelet[2864]: E0115 05:58:25.166606 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:58:25.863925 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:40780.service - OpenSSH per-connection server daemon (10.0.0.1:40780). Jan 15 05:58:25.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.115:22-10.0.0.1:40780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:25.885418 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 05:58:25.885560 kernel: audit: type=1130 audit(1768456705.863:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.115:22-10.0.0.1:40780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:25.985000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:25.986589 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 40780 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:25.990936 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:26.003478 systemd-logind[1584]: New session 20 of user core. Jan 15 05:58:26.013534 kernel: audit: type=1101 audit(1768456705.985:840): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.013653 kernel: audit: type=1103 audit(1768456705.988:841): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:25.988000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.043974 kernel: audit: type=1006 audit(1768456705.988:842): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 15 05:58:26.044201 kernel: audit: type=1300 audit(1768456705.988:842): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefa6b1690 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:25.988000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefa6b1690 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:25.988000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:26.075633 kernel: audit: type=1327 audit(1768456705.988:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:26.081647 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 05:58:26.087000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.091000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.128539 kernel: audit: type=1105 audit(1768456706.087:843): pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.128669 kernel: audit: type=1103 audit(1768456706.091:844): pid=5537 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.246088 sshd[5537]: Connection closed by 10.0.0.1 port 40780 Jan 15 05:58:26.246669 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:26.249000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.255073 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:40780.service: Deactivated successfully. Jan 15 05:58:26.259094 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 05:58:26.260988 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Jan 15 05:58:26.263437 systemd-logind[1584]: Removed session 20. Jan 15 05:58:26.250000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.291372 kernel: audit: type=1106 audit(1768456706.249:845): pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.291436 kernel: audit: type=1104 audit(1768456706.250:846): pid=5533 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:26.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.115:22-10.0.0.1:40780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:27.167207 kubelet[2864]: E0115 05:58:27.166817 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:58:30.172091 kubelet[2864]: E0115 05:58:30.171551 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:58:31.164093 kubelet[2864]: E0115 05:58:31.163894 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:58:31.264730 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782). Jan 15 05:58:31.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.115:22-10.0.0.1:40782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:31.271505 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:31.271588 kernel: audit: type=1130 audit(1768456711.265:848): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.115:22-10.0.0.1:40782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:31.368000 audit[5556]: USER_ACCT pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.370175 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:31.374222 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:31.387842 systemd-logind[1584]: New session 21 of user core. 
Jan 15 05:58:31.400525 kernel: audit: type=1101 audit(1768456711.368:849): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.400693 kernel: audit: type=1103 audit(1768456711.370:850): pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.370000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.445387 kernel: audit: type=1006 audit(1768456711.370:851): pid=5556 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 15 05:58:31.370000 audit[5556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81c950e0 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:31.448379 kernel: audit: type=1300 audit(1768456711.370:851): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81c950e0 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:31.448755 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 05:58:31.370000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:31.488498 kernel: audit: type=1327 audit(1768456711.370:851): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:31.459000 audit[5556]: USER_START pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.521729 kernel: audit: type=1105 audit(1768456711.459:852): pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.465000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.549473 kernel: audit: type=1103 audit(1768456711.465:853): pid=5560 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.714861 sshd[5560]: Connection closed by 10.0.0.1 port 40782 Jan 15 05:58:31.716587 sshd-session[5556]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:31.719000 audit[5556]: USER_END pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.726887 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:40782.service: Deactivated successfully. Jan 15 05:58:31.730737 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 05:58:31.733701 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Jan 15 05:58:31.736604 systemd-logind[1584]: Removed session 21. 
Jan 15 05:58:31.720000 audit[5556]: CRED_DISP pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.765451 kernel: audit: type=1106 audit(1768456711.719:854): pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.765599 kernel: audit: type=1104 audit(1768456711.720:855): pid=5556 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:31.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.115:22-10.0.0.1:40782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:33.169650 kubelet[2864]: E0115 05:58:33.167637 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:58:35.164525 kubelet[2864]: E0115 05:58:35.164060 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:58:35.167395 kubelet[2864]: E0115 05:58:35.164838 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:58:35.167395 kubelet[2864]: E0115 05:58:35.166147 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:58:36.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.115:22-10.0.0.1:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:36.741912 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:49454.service - OpenSSH per-connection server daemon (10.0.0.1:49454). Jan 15 05:58:36.748377 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:36.748497 kernel: audit: type=1130 audit(1768456716.740:857): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.115:22-10.0.0.1:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:36.859000 audit[5608]: USER_ACCT pid=5608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.861071 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 49454 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:36.863703 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:36.874225 systemd-logind[1584]: New session 22 of user core. Jan 15 05:58:36.860000 audit[5608]: CRED_ACQ pid=5608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.899473 kernel: audit: type=1101 audit(1768456716.859:858): pid=5608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.899532 kernel: audit: type=1103 audit(1768456716.860:859): pid=5608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.899565 kernel: audit: type=1006 audit(1768456716.861:860): pid=5608 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 15 05:58:36.861000 audit[5608]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff79fe5c90 a2=3 a3=0 items=0 ppid=1 pid=5608 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:36.932450 kernel: audit: type=1300 audit(1768456716.861:860): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff79fe5c90 a2=3 a3=0 items=0 ppid=1 pid=5608 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:36.932697 kernel: audit: type=1327 audit(1768456716.861:860): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:36.861000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:36.942070 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 15 05:58:36.946000 audit[5608]: USER_START pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.981824 kernel: audit: type=1105 audit(1768456716.946:861): pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.981960 kernel: audit: type=1103 audit(1768456716.950:862): pid=5612 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:36.950000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:37.137588 sshd[5612]: Connection closed by 10.0.0.1 port 49454 Jan 15 05:58:37.138125 sshd-session[5608]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:37.140000 audit[5608]: USER_END pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:37.145214 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:49454.service: Deactivated successfully. Jan 15 05:58:37.149572 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 05:58:37.154151 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Jan 15 05:58:37.156911 systemd-logind[1584]: Removed session 22. Jan 15 05:58:37.140000 audit[5608]: CRED_DISP pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:37.188441 kernel: audit: type=1106 audit(1768456717.140:863): pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:37.188600 kernel: audit: type=1104 audit(1768456717.140:864): pid=5608 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:37.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.115:22-10.0.0.1:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:38.167765 containerd[1599]: time="2026-01-15T05:58:38.167184392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:58:38.271327 containerd[1599]: time="2026-01-15T05:58:38.270933630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:38.273610 containerd[1599]: time="2026-01-15T05:58:38.273459872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:58:38.273610 containerd[1599]: time="2026-01-15T05:58:38.273575667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:38.273775 kubelet[2864]: E0115 05:58:38.273740 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:58:38.274552 kubelet[2864]: E0115 05:58:38.273785 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:58:38.274552 kubelet[2864]: E0115 05:58:38.273859 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:38.277398 containerd[1599]: time="2026-01-15T05:58:38.276073507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:58:38.346364 containerd[1599]: time="2026-01-15T05:58:38.345706276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:38.348921 containerd[1599]: time="2026-01-15T05:58:38.348754418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:58:38.349091 containerd[1599]: time="2026-01-15T05:58:38.348955771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:38.351396 kubelet[2864]: E0115 05:58:38.350431 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:58:38.351396 kubelet[2864]: E0115 05:58:38.350731 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:58:38.351396 kubelet[2864]: E0115 05:58:38.350928 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-glvpn_calico-system(94de96e0-d8e2-4380-a60f-000b8e6b1786): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:38.351396 kubelet[2864]: E0115 05:58:38.351078 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:58:41.171067 containerd[1599]: time="2026-01-15T05:58:41.170642939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:58:41.244373 containerd[1599]: time="2026-01-15T05:58:41.242447684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:41.246727 containerd[1599]: time="2026-01-15T05:58:41.246590760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:58:41.246727 containerd[1599]: time="2026-01-15T05:58:41.246702798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:41.248902 kubelet[2864]: E0115 05:58:41.247411 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:58:41.248902 kubelet[2864]: E0115 05:58:41.247481 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:58:41.248902 kubelet[2864]: E0115 05:58:41.248153 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:41.256034 containerd[1599]: time="2026-01-15T05:58:41.255457716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 
05:58:41.321815 containerd[1599]: time="2026-01-15T05:58:41.321673830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:41.323869 containerd[1599]: time="2026-01-15T05:58:41.323706763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:58:41.324489 containerd[1599]: time="2026-01-15T05:58:41.324075948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:41.325820 kubelet[2864]: E0115 05:58:41.325682 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:58:41.325820 kubelet[2864]: E0115 05:58:41.325806 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:58:41.325931 kubelet[2864]: E0115 05:58:41.325884 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f7495bcf-nsnsl_calico-system(4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:41.326042 kubelet[2864]: E0115 05:58:41.325927 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:58:42.155595 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:49458.service - OpenSSH per-connection server daemon (10.0.0.1:49458). Jan 15 05:58:42.163454 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:58:42.163706 kernel: audit: type=1130 audit(1768456722.154:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.115:22-10.0.0.1:49458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:42.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.115:22-10.0.0.1:49458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:42.276000 audit[5629]: USER_ACCT pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.280664 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 49458 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:42.282832 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:42.294522 systemd-logind[1584]: New session 23 of user core. Jan 15 05:58:42.279000 audit[5629]: CRED_ACQ pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.334514 kernel: audit: type=1101 audit(1768456722.276:867): pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.334611 kernel: audit: type=1103 audit(1768456722.279:868): pid=5629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.334659 kernel: audit: type=1006 audit(1768456722.279:869): pid=5629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 15 05:58:42.346702 kernel: audit: type=1300 audit(1768456722.279:869): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe78761380 a2=3 a3=0 items=0 ppid=1 pid=5629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:42.279000 audit[5629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe78761380 a2=3 a3=0 items=0 ppid=1 pid=5629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:42.279000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:42.378878 kernel: audit: type=1327 audit(1768456722.279:869): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:42.381116 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 15 05:58:42.386000 audit[5629]: USER_START pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.386000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.439495 kernel: audit: type=1105 audit(1768456722.386:870): pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.439602 kernel: audit: type=1103 audit(1768456722.386:871): pid=5633 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.565442 sshd[5633]: Connection closed by 10.0.0.1 port 49458 Jan 15 05:58:42.566367 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:42.568000 audit[5629]: USER_END pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.604432 kernel: audit: type=1106 audit(1768456722.568:872): pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.604575 kernel: audit: type=1104 audit(1768456722.568:873): pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.568000 audit[5629]: CRED_DISP pid=5629 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.633796 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:49458.service: Deactivated successfully. Jan 15 05:58:42.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.115:22-10.0.0.1:49458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:42.638701 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 05:58:42.640858 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. 
Jan 15 05:58:42.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.115:22-10.0.0.1:49470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:42.649107 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470). Jan 15 05:58:42.650734 systemd-logind[1584]: Removed session 23. Jan 15 05:58:42.739000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.741033 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:42.742000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.742000 audit[5646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc40073e10 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:42.742000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:42.745604 sshd-session[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:42.757648 systemd-logind[1584]: New session 24 of user core. Jan 15 05:58:42.773924 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 15 05:58:42.779000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:42.783000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.169123 containerd[1599]: time="2026-01-15T05:58:43.168807885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:58:43.248921 containerd[1599]: time="2026-01-15T05:58:43.247575214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:43.253575 containerd[1599]: time="2026-01-15T05:58:43.253528787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:58:43.254058 containerd[1599]: time="2026-01-15T05:58:43.253743355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:43.254947 kubelet[2864]: E0115 05:58:43.254787 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:58:43.254947 kubelet[2864]: E0115 05:58:43.254923 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:58:43.255668 kubelet[2864]: E0115 05:58:43.255120 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-98d64bddf-vgrjr_calico-system(9f5c6d0a-fde4-4893-b36a-da65165e8843): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:43.255668 kubelet[2864]: E0115 05:58:43.255168 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:58:43.397381 sshd[5651]: Connection closed by 10.0.0.1 port 49470 Jan 15 05:58:43.396453 sshd-session[5646]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:43.408000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.408000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.415165 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:49472.service - OpenSSH per-connection server daemon (10.0.0.1:49472). Jan 15 05:58:43.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.115:22-10.0.0.1:49472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:43.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.115:22-10.0.0.1:49470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:43.424085 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:49470.service: Deactivated successfully. Jan 15 05:58:43.439371 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 05:58:43.445691 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit. Jan 15 05:58:43.448851 systemd-logind[1584]: Removed session 24. Jan 15 05:58:43.597000 audit[5660]: USER_ACCT pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.599446 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 49472 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:43.600000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.600000 audit[5660]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2f711880 a2=3 a3=0 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:43.600000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:43.603641 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:43.617408 systemd-logind[1584]: New session 25 of user core. Jan 15 05:58:43.629765 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 15 05:58:43.636000 audit[5660]: USER_START pid=5660 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:43.640000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:44.795584 sshd[5667]: Connection closed by 10.0.0.1 port 49472 Jan 15 05:58:44.795130 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:44.797000 audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:44.797000 audit[5660]: CRED_DISP pid=5660 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:44.800000 audit[5693]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:44.800000 audit[5693]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffde1559e00 a2=0 a3=7ffde1559dec items=0 ppid=3019 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:44.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:44.809000 audit[5693]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:44.809000 audit[5693]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffde1559e00 a2=0 a3=0 items=0 ppid=3019 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:44.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:44.815662 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:49472.service: Deactivated successfully. Jan 15 05:58:44.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.115:22-10.0.0.1:49472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:44.824929 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 05:58:44.830929 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit. Jan 15 05:58:44.842794 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:59524.service - OpenSSH per-connection server daemon (10.0.0.1:59524). 
Jan 15 05:58:44.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.115:22-10.0.0.1:59524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:44.848506 systemd-logind[1584]: Removed session 25. Jan 15 05:58:44.982000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:44.984683 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 59524 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:44.985000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:44.985000 audit[5698]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca0127cc0 a2=3 a3=0 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:44.985000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:44.989169 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:45.001895 systemd-logind[1584]: New session 26 of user core. Jan 15 05:58:45.014223 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 05:58:45.020000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.024000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.169389 containerd[1599]: time="2026-01-15T05:58:45.168516777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:58:45.234592 containerd[1599]: time="2026-01-15T05:58:45.234455334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:45.237603 containerd[1599]: time="2026-01-15T05:58:45.237175655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:58:45.237742 containerd[1599]: time="2026-01-15T05:58:45.237651578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:45.238557 kubelet[2864]: E0115 05:58:45.238104 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:58:45.239622 kubelet[2864]: E0115 05:58:45.238703 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:58:45.240453 kubelet[2864]: E0115 05:58:45.240038 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-h9kdb_calico-apiserver(759e03fd-9efa-4510-b2ed-62c16a4c2e13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:45.240453 kubelet[2864]: E0115 05:58:45.240176 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:58:45.470802 sshd[5702]: Connection closed by 10.0.0.1 port 59524 Jan 15 05:58:45.472906 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:45.475000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.476000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.115:22-10.0.0.1:59524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:45.494640 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:59524.service: Deactivated successfully. Jan 15 05:58:45.504042 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 05:58:45.510645 systemd-logind[1584]: Session 26 logged out. Waiting for processes to exit. Jan 15 05:58:45.519173 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:59530.service - OpenSSH per-connection server daemon (10.0.0.1:59530). Jan 15 05:58:45.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.115:22-10.0.0.1:59530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:45.522210 systemd-logind[1584]: Removed session 26. 
Jan 15 05:58:45.668000 audit[5714]: USER_ACCT pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.670159 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 59530 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:45.670000 audit[5714]: CRED_ACQ pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.671000 audit[5714]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe02bc2590 a2=3 a3=0 items=0 ppid=1 pid=5714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:45.671000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:45.674542 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:45.689539 systemd-logind[1584]: New session 27 of user core. Jan 15 05:58:45.701871 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 05:58:45.707000 audit[5714]: USER_START pid=5714 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.713000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.875000 audit[5729]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:45.875000 audit[5729]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd878f4810 a2=0 a3=7ffd878f47fc items=0 ppid=3019 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:45.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:45.885000 audit[5729]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:45.885000 audit[5729]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd878f4810 a2=0 a3=0 items=0 ppid=3019 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:45.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:45.911841 sshd[5718]: Connection closed by 10.0.0.1 port 59530 Jan 15 05:58:45.913899 sshd-session[5714]: pam_unix(sshd:session): 
session closed for user core Jan 15 05:58:45.918000 audit[5714]: USER_END pid=5714 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.918000 audit[5714]: CRED_DISP pid=5714 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:45.925909 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:59530.service: Deactivated successfully. Jan 15 05:58:45.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.115:22-10.0.0.1:59530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:45.931805 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 05:58:45.937211 systemd-logind[1584]: Session 27 logged out. Waiting for processes to exit. Jan 15 05:58:45.940585 systemd-logind[1584]: Removed session 27. Jan 15 05:58:46.575216 systemd[1715]: Created slice background.slice - User Background Tasks Slice. Jan 15 05:58:46.580891 systemd[1715]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 15 05:58:46.640517 systemd[1715]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 15 05:58:48.169182 containerd[1599]: time="2026-01-15T05:58:48.168900753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:58:48.243350 containerd[1599]: time="2026-01-15T05:58:48.243119071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:48.246400 containerd[1599]: time="2026-01-15T05:58:48.246164277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:58:48.246400 containerd[1599]: time="2026-01-15T05:58:48.246373425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:48.246896 kubelet[2864]: E0115 05:58:48.246531 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:58:48.246896 kubelet[2864]: E0115 05:58:48.246577 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:58:48.246896 kubelet[2864]: E0115 05:58:48.246646 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ffcfc74f7-b2c68_calico-apiserver(b9ae406d-9e12-445c-a7c0-69e8063e9379): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:48.246896 kubelet[2864]: E0115 05:58:48.246677 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:58:49.169514 kubelet[2864]: E0115 05:58:49.169195 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786" Jan 15 05:58:50.171857 containerd[1599]: time="2026-01-15T05:58:50.171554128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:58:50.235424 containerd[1599]: time="2026-01-15T05:58:50.235143017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:58:50.239882 containerd[1599]: time="2026-01-15T05:58:50.239854159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:58:50.240445 containerd[1599]: time="2026-01-15T05:58:50.240065712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:58:50.241840 kubelet[2864]: E0115 05:58:50.241711 2864 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:58:50.241840 kubelet[2864]: E0115 05:58:50.241816 2864 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:58:50.242438 kubelet[2864]: E0115 05:58:50.241877 2864 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-nxntl_calico-system(125448ce-e54b-4cc3-923a-6bb87264173b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:58:50.242438 kubelet[2864]: E0115 05:58:50.241908 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:58:50.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.115:22-10.0.0.1:59532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:50.936737 systemd[1]: Started sshd@26-10.0.0.115:22-10.0.0.1:59532.service - OpenSSH per-connection server daemon (10.0.0.1:59532). Jan 15 05:58:50.956508 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 15 05:58:50.956631 kernel: audit: type=1130 audit(1768456730.935:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.115:22-10.0.0.1:59532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:51.077000 audit[5743]: USER_ACCT pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.078872 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 59532 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:51.082781 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:51.096897 systemd-logind[1584]: New session 28 of user core. 
Jan 15 05:58:51.121870 kernel: audit: type=1101 audit(1768456731.077:916): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.122052 kernel: audit: type=1103 audit(1768456731.078:917): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.078000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.078000 audit[5743]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd33613a50 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:51.180323 kernel: audit: type=1006 audit(1768456731.078:918): pid=5743 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 15 05:58:51.180419 kernel: audit: type=1300 audit(1768456731.078:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd33613a50 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:51.078000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:51.190288 kernel: audit: type=1327 audit(1768456731.078:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:51.191923 systemd[1]: Started session-28.scope - Session 28 of User core. 
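Editor's note: the kernel `audit: type=NNNN` lines here duplicate the userspace audit records (USER_ACCT, CRED_ACQ, PROCTITLE, ...) for the same event IDs as they reach the console. The `audit(1768456731.078:918)` stamp is a Unix epoch plus an event serial, `arch=c000003e` is x86_64 (so `syscall=1` is write and `syscall=46` is sendmsg), and `proctitle=` is the hex-encoded argv with NUL separators. A small helper sketch for reading those fields; the function names and the type table below are ours, not part of any tool appearing in this log:

    from datetime import datetime, timezone

    # Audit record type numbers seen in this log, with the names the
    # userspace records use for the same events (e.g. type=1101 pairs
    # with the USER_ACCT lines above).
    AUDIT_TYPES = {
        1006: "LOGIN", 1101: "USER_ACCT", 1103: "CRED_ACQ",
        1104: "CRED_DISP", 1105: "USER_START", 1106: "USER_END",
        1130: "SERVICE_START", 1131: "SERVICE_STOP",
        1300: "SYSCALL", 1327: "PROCTITLE",
    }

    def decode_audit_stamp(stamp: str) -> str:
        """'1768456731.078:918' -> UTC wall-clock time plus event serial."""
        epoch, serial = stamp.split(":")
        when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
        return f"{when.isoformat()} (event #{serial})"

    def decode_proctitle(hex_blob: str) -> str:
        """PROCTITLE is the hex-encoded argv, with NUL between arguments."""
        return bytes.fromhex(hex_blob).replace(b"\x00", b" ").decode()

    print(f"type=1130 -> {AUDIT_TYPES[1130]}")
    # -> type=1130 -> SERVICE_START
    print(decode_audit_stamp("1768456731.078:918"))
    # -> 2026-01-15T05:58:51.078000+00:00 (event #918)
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]
    print(decode_proctitle("69707461626C65732D726573746F7265002D770035"
                           "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 --noflush --counters

The decoded proctitles confirm that the PROCTITLE records above belong to the privileged sshd-session process for user core and to the kube-proxy-driven `iptables-restore -w 5 --noflush --counters` invocations behind the NETFILTER_CFG entries.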
Jan 15 05:58:51.196000 audit[5743]: USER_START pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.201000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.265570 kernel: audit: type=1105 audit(1768456731.196:919): pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.265645 kernel: audit: type=1103 audit(1768456731.201:920): pid=5747 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.376669 sshd[5747]: Connection closed by 10.0.0.1 port 59532 Jan 15 05:58:51.377711 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:51.380000 audit[5743]: USER_END pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.386703 systemd[1]: sshd@26-10.0.0.115:22-10.0.0.1:59532.service: Deactivated successfully. Jan 15 05:58:51.390189 systemd[1]: session-28.scope: Deactivated successfully. Jan 15 05:58:51.393542 systemd-logind[1584]: Session 28 logged out. Waiting for processes to exit. Jan 15 05:58:51.395877 systemd-logind[1584]: Removed session 28. Jan 15 05:58:51.380000 audit[5743]: CRED_DISP pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.435724 kernel: audit: type=1106 audit(1768456731.380:921): pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.435801 kernel: audit: type=1104 audit(1768456731.380:922): pid=5743 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:51.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.115:22-10.0.0.1:59532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:51.756000 audit[5760]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:51.756000 audit[5760]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5b3b4980 a2=0 a3=7ffc5b3b496c items=0 ppid=3019 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:51.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:51.771000 audit[5760]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:58:51.771000 audit[5760]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc5b3b4980 a2=0 a3=7ffc5b3b496c items=0 ppid=3019 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:51.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:58:52.171140 kubelet[2864]: E0115 05:58:52.169916 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:58:52.175139 kubelet[2864]: E0115 05:58:52.174712 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f7495bcf-nsnsl" podUID="4719fe8a-c2f6-4614-8c44-0ea32f2ef4cf" Jan 15 05:58:56.165432 kubelet[2864]: E0115 05:58:56.164820 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:58:56.165935 kubelet[2864]: E0115 05:58:56.165487 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-98d64bddf-vgrjr" podUID="9f5c6d0a-fde4-4893-b36a-da65165e8843" Jan 15 05:58:56.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@27-10.0.0.115:22-10.0.0.1:50784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:56.406697 systemd[1]: Started sshd@27-10.0.0.115:22-10.0.0.1:50784.service - OpenSSH per-connection server daemon (10.0.0.1:50784). Jan 15 05:58:56.417493 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 05:58:56.417557 kernel: audit: type=1130 audit(1768456736.408:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.115:22-10.0.0.1:50784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:58:56.563000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.566904 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 50784 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:58:56.570775 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:58:56.598594 systemd-logind[1584]: New session 29 of user core. Jan 15 05:58:56.567000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.636659 kernel: audit: type=1101 audit(1768456736.563:927): pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.636810 kernel: audit: type=1103 audit(1768456736.567:928): pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.636838 kernel: audit: type=1006 audit(1768456736.567:929): pid=5762 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 15 05:58:56.661420 kernel: audit: type=1300 audit(1768456736.567:929): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc322f7500 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:56.567000 audit[5762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc322f7500 a2=3 a3=0 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:58:56.567000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:56.701517 kernel: audit: type=1327 audit(1768456736.567:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:58:56.701693 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 15 05:58:56.708000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.754715 kernel: audit: type=1105 audit(1768456736.708:930): pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.754816 kernel: audit: type=1103 audit(1768456736.711:931): pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.711000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.956122 sshd[5766]: Connection closed by 10.0.0.1 port 50784 Jan 15 05:58:56.956737 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 15 05:58:56.960000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.966191 systemd[1]: sshd@27-10.0.0.115:22-10.0.0.1:50784.service: Deactivated successfully. Jan 15 05:58:56.972185 systemd[1]: session-29.scope: Deactivated successfully. Jan 15 05:58:56.976410 systemd-logind[1584]: Session 29 logged out. Waiting for processes to exit. Jan 15 05:58:56.983221 systemd-logind[1584]: Removed session 29. Jan 15 05:58:56.960000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:57.028639 kernel: audit: type=1106 audit(1768456736.960:932): pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:57.028691 kernel: audit: type=1104 audit(1768456736.960:933): pid=5762 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:58:56.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.115:22-10.0.0.1:50784 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:58:58.165215 kubelet[2864]: E0115 05:58:58.164722 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:58:58.167493 kubelet[2864]: E0115 05:58:58.166859 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-h9kdb" podUID="759e03fd-9efa-4510-b2ed-62c16a4c2e13" Jan 15 05:59:01.165606 kubelet[2864]: E0115 05:59:01.165529 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:59:01.984455 systemd[1]: Started sshd@28-10.0.0.115:22-10.0.0.1:50796.service - OpenSSH per-connection server daemon (10.0.0.1:50796). Jan 15 05:59:01.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.115:22-10.0.0.1:50796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:59:01.991576 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:59:01.991648 kernel: audit: type=1130 audit(1768456741.983:935): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.115:22-10.0.0.1:50796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:59:02.125616 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:59:02.123000 audit[5780]: USER_ACCT pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.133158 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:59:02.148453 systemd-logind[1584]: New session 30 of user core. 
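Editor's note: the recurring kubelet `dns.go:154 "Nameserver limits exceeded"` warnings above mean the node's /etc/resolv.conf lists more nameservers than kubelet will pass through to pods; kubelet keeps at most three, which is why the applied line is exactly `1.1.1.1 1.0.0.1 8.8.8.8`. A rough illustration of that truncation follows; this is not kubelet's actual code, and the four-entry resolv.conf is a hypothetical example, since the log does not show which extra server was dropped:

    # Illustration only: keep at most three nameservers, the way kubelet
    # caps the per-pod nameserver list. The resolv.conf content here is a
    # made-up example with one server too many.
    MAX_DNS_NAMESERVERS = 3

    resolv_conf = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """

    nameservers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver")
    ]

    if len(nameservers) > MAX_DNS_NAMESERVERS:
        print("Nameserver limits exceeded, applied line:",
              " ".join(nameservers[:MAX_DNS_NAMESERVERS]))
    # -> Nameserver limits exceeded, applied line: 1.1.1.1 1.0.0.1 8.8.8.8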
Jan 15 05:59:02.162679 kernel: audit: type=1101 audit(1768456742.123:936): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.163052 kernel: audit: type=1103 audit(1768456742.128:937): pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.128000 audit[5780]: CRED_ACQ pid=5780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.217845 kernel: audit: type=1006 audit(1768456742.128:938): pid=5780 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 15 05:59:02.218130 kernel: audit: type=1300 audit(1768456742.128:938): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd3786170 a2=3 a3=0 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:59:02.128000 audit[5780]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd3786170 a2=3 a3=0 items=0 ppid=1 pid=5780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:59:02.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:59:02.272693 kernel: audit: type=1327 audit(1768456742.128:938): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:59:02.274524 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 15 05:59:02.281000 audit[5780]: USER_START pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.286000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.353124 kernel: audit: type=1105 audit(1768456742.281:939): pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.353411 kernel: audit: type=1103 audit(1768456742.286:940): pid=5784 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.490636 sshd[5784]: Connection closed by 10.0.0.1 port 50796 Jan 15 05:59:02.491138 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Jan 15 05:59:02.494000 audit[5780]: USER_END pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.502439 systemd[1]: sshd@28-10.0.0.115:22-10.0.0.1:50796.service: Deactivated successfully. Jan 15 05:59:02.503460 systemd-logind[1584]: Session 30 logged out. Waiting for processes to exit. Jan 15 05:59:02.507674 systemd[1]: session-30.scope: Deactivated successfully. Jan 15 05:59:02.514046 systemd-logind[1584]: Removed session 30. Jan 15 05:59:02.494000 audit[5780]: CRED_DISP pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.569577 kernel: audit: type=1106 audit(1768456742.494:941): pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.569760 kernel: audit: type=1104 audit(1768456742.494:942): pid=5780 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:59:02.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.115:22-10.0.0.1:50796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:59:03.167442 kubelet[2864]: E0115 05:59:03.166697 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ffcfc74f7-b2c68" podUID="b9ae406d-9e12-445c-a7c0-69e8063e9379" Jan 15 05:59:03.175633 kubelet[2864]: E0115 05:59:03.175557 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-nxntl" podUID="125448ce-e54b-4cc3-923a-6bb87264173b" Jan 15 05:59:04.178534 kubelet[2864]: E0115 05:59:04.178482 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-glvpn" podUID="94de96e0-d8e2-4380-a60f-000b8e6b1786"
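Editor's note: every ErrImagePull / ImagePullBackOff entry in this stretch traces back to the same `404 Not Found` from ghcr.io for the Calico v3.30.4 tags, so the affected pods will keep backing off until those tags become resolvable. One way to re-check the pulls directly against the node's container runtime, outside kubelet's back-off, is sketched below; it assumes crictl is installed and configured for the same containerd socket kubelet uses, and the image list is copied from the errors above:

    # Diagnostic sketch: retry the failing pulls via the CRI to confirm
    # the 404s come from the registry rather than from kubelet state.
    import subprocess

    FAILING_IMAGES = [
        "ghcr.io/flatcar/calico/apiserver:v3.30.4",
        "ghcr.io/flatcar/calico/goldmane:v3.30.4",
        "ghcr.io/flatcar/calico/csi:v3.30.4",
        "ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4",
        "ghcr.io/flatcar/calico/whisker:v3.30.4",
        "ghcr.io/flatcar/calico/whisker-backend:v3.30.4",
        "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
    ]

    for image in FAILING_IMAGES:
        result = subprocess.run(["crictl", "pull", image],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(f"{image}: pulled")
        else:
            reason = (result.stderr.strip().splitlines() or ["<no stderr>"])[-1]
            print(f"{image}: failed -> {reason}")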