Jan 14 01:10:32.270487 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:10:32.270524 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:10:32.270543 kernel: BIOS-provided physical RAM map: Jan 14 01:10:32.270551 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 14 01:10:32.270559 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 14 01:10:32.270566 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 14 01:10:32.270576 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 14 01:10:32.270587 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 14 01:10:32.270595 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 14 01:10:32.270603 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 14 01:10:32.270611 kernel: NX (Execute Disable) protection: active Jan 14 01:10:32.270625 kernel: APIC: Static calls initialized Jan 14 01:10:32.270633 kernel: SMBIOS 2.8 present. Jan 14 01:10:32.270641 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 14 01:10:32.270652 kernel: DMI: Memory slots populated: 1/1 Jan 14 01:10:32.270661 kernel: Hypervisor detected: KVM Jan 14 01:10:32.270678 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 14 01:10:32.270687 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 14 01:10:32.270696 kernel: kvm-clock: using sched offset of 4623910828 cycles Jan 14 01:10:32.270710 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 01:10:32.270734 kernel: tsc: Detected 2294.608 MHz processor Jan 14 01:10:32.270744 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:10:32.270754 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:10:32.270770 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 14 01:10:32.270779 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 14 01:10:32.270789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:10:32.270798 kernel: ACPI: Early table checksum verification disabled Jan 14 01:10:32.270807 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 14 01:10:32.270817 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270827 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270867 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270884 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 14 01:10:32.270894 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270903 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270912 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 
01:10:32.270922 kernel: ACPI: WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 01:10:32.270931 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Jan 14 01:10:32.270940 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Jan 14 01:10:32.270971 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 14 01:10:32.270985 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Jan 14 01:10:32.271010 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Jan 14 01:10:32.271023 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Jan 14 01:10:32.271040 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Jan 14 01:10:32.271070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 14 01:10:32.271091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 14 01:10:32.271112 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Jan 14 01:10:32.271133 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Jan 14 01:10:32.271154 kernel: Zone ranges: Jan 14 01:10:32.271175 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:10:32.271202 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 14 01:10:32.271223 kernel: Normal empty Jan 14 01:10:32.271244 kernel: Device empty Jan 14 01:10:32.271265 kernel: Movable zone start for each node Jan 14 01:10:32.271286 kernel: Early memory node ranges Jan 14 01:10:32.271307 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 14 01:10:32.271328 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 14 01:10:32.271349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 14 01:10:32.271376 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:10:32.271397 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 14 01:10:32.271418 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 14 01:10:32.271442 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 14 01:10:32.271463 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 14 01:10:32.271486 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 01:10:32.271507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 14 01:10:32.271534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 14 01:10:32.271555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:10:32.271579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 14 01:10:32.271600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 14 01:10:32.271621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:10:32.271642 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 14 01:10:32.271677 kernel: TSC deadline timer available Jan 14 01:10:32.271704 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:10:32.271743 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:10:32.271769 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:10:32.271795 kernel: CPU topo: Max. threads per core: 1 Jan 14 01:10:32.271816 kernel: CPU topo: Num. cores per package: 2 Jan 14 01:10:32.271849 kernel: CPU topo: Num. 
threads per package: 2 Jan 14 01:10:32.271870 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:10:32.271891 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 14 01:10:32.271929 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 14 01:10:32.271956 kernel: Booting paravirtualized kernel on KVM Jan 14 01:10:32.271977 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:10:32.271999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:10:32.272020 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:10:32.272041 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:10:32.272062 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:10:32.272091 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 14 01:10:32.272113 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:10:32.272134 kernel: random: crng init done Jan 14 01:10:32.272155 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 01:10:32.272176 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:10:32.272197 kernel: Fallback order for Node 0: 0 Jan 14 01:10:32.272218 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Jan 14 01:10:32.272246 kernel: Policy zone: DMA32 Jan 14 01:10:32.272268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:10:32.272289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:10:32.272310 kernel: Kernel/User page tables isolation: enabled Jan 14 01:10:32.272331 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:10:32.272352 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:10:32.272373 kernel: Dynamic Preempt: voluntary Jan 14 01:10:32.272403 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:10:32.272426 kernel: rcu: RCU event tracing is enabled. Jan 14 01:10:32.272456 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:10:32.272472 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:10:32.272488 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:10:32.272502 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:10:32.272514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:10:32.272530 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:10:32.272553 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:10:32.272566 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:10:32.272577 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:10:32.272587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 14 01:10:32.272597 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 01:10:32.272606 kernel: Console: colour VGA+ 80x25 Jan 14 01:10:32.272616 kernel: printk: legacy console [tty0] enabled Jan 14 01:10:32.272633 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:10:32.272642 kernel: ACPI: Core revision 20240827 Jan 14 01:10:32.272653 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 14 01:10:32.272680 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:10:32.272696 kernel: x2apic enabled Jan 14 01:10:32.272706 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:10:32.272717 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 14 01:10:32.272727 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 14 01:10:32.272740 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 14 01:10:32.272757 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 01:10:32.272767 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 01:10:32.272778 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:10:32.272788 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 01:10:32.273196 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:10:32.273213 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 14 01:10:32.273224 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 14 01:10:32.273243 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 14 01:10:32.273263 kernel: MDS: Mitigation: Clear CPU buffers Jan 14 01:10:32.273279 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 01:10:32.273294 kernel: active return thunk: its_return_thunk Jan 14 01:10:32.273329 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:10:32.273344 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:10:32.273359 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:10:32.273376 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:10:32.273389 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:10:32.273405 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 14 01:10:32.273419 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:10:32.273444 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:10:32.273459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:10:32.273475 kernel: landlock: Up and running. Jan 14 01:10:32.273491 kernel: SELinux: Initializing. Jan 14 01:10:32.273505 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 01:10:32.273520 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 01:10:32.273535 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 14 01:10:32.273561 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jan 14 01:10:32.273576 kernel: signal: max sigframe size: 1776 Jan 14 01:10:32.273593 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:10:32.273610 kernel: rcu: Max phase no-delay instances is 400. 
Jan 14 01:10:32.273624 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:10:32.273639 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:10:32.273655 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:10:32.273686 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:10:32.273701 kernel: .... node #0, CPUs: #1 Jan 14 01:10:32.273720 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:10:32.273735 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 14 01:10:32.273752 kernel: Memory: 1983292K/2096612K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 108756K reserved, 0K cma-reserved) Jan 14 01:10:32.273766 kernel: devtmpfs: initialized Jan 14 01:10:32.273782 kernel: x86/mm: Memory block size: 128MB Jan 14 01:10:32.273808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:10:32.273823 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:10:32.278106 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:10:32.278135 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:10:32.278147 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:10:32.278159 kernel: audit: type=2000 audit(1768353028.581:1): state=initialized audit_enabled=0 res=1 Jan 14 01:10:32.278170 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:10:32.278209 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:10:32.278220 kernel: cpuidle: using governor menu Jan 14 01:10:32.278231 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:10:32.278241 kernel: dca service started, version 1.12.1 Jan 14 01:10:32.278252 kernel: PCI: Using configuration type 1 for base access Jan 14 01:10:32.278263 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 01:10:32.278274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:10:32.278290 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:10:32.278301 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:10:32.278311 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:10:32.278322 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:10:32.278332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 01:10:32.278343 kernel: ACPI: Interpreter enabled Jan 14 01:10:32.278353 kernel: ACPI: PM: (supports S0 S5) Jan 14 01:10:32.278364 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:10:32.278380 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:10:32.278391 kernel: PCI: Using E820 reservations for host bridge windows Jan 14 01:10:32.278401 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 14 01:10:32.278412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 01:10:32.278707 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 14 01:10:32.278874 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 14 01:10:32.279030 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 14 01:10:32.279044 kernel: acpiphp: Slot [3] registered Jan 14 01:10:32.279055 kernel: acpiphp: Slot [4] registered Jan 14 01:10:32.279065 kernel: acpiphp: Slot [5] registered Jan 14 01:10:32.279076 kernel: acpiphp: Slot [6] registered Jan 14 01:10:32.279086 kernel: acpiphp: Slot [7] registered Jan 14 01:10:32.279104 kernel: acpiphp: Slot [8] registered Jan 14 01:10:32.279115 kernel: acpiphp: Slot [9] registered Jan 14 01:10:32.279125 kernel: acpiphp: Slot [10] registered Jan 14 01:10:32.279136 kernel: acpiphp: Slot [11] registered Jan 14 01:10:32.279146 kernel: acpiphp: Slot [12] registered Jan 14 01:10:32.279156 kernel: acpiphp: Slot [13] registered Jan 14 01:10:32.279167 kernel: acpiphp: Slot [14] registered Jan 14 01:10:32.279177 kernel: acpiphp: Slot [15] registered Jan 14 01:10:32.279194 kernel: acpiphp: Slot [16] registered Jan 14 01:10:32.279204 kernel: acpiphp: Slot [17] registered Jan 14 01:10:32.279215 kernel: acpiphp: Slot [18] registered Jan 14 01:10:32.279225 kernel: acpiphp: Slot [19] registered Jan 14 01:10:32.279235 kernel: acpiphp: Slot [20] registered Jan 14 01:10:32.279245 kernel: acpiphp: Slot [21] registered Jan 14 01:10:32.279256 kernel: acpiphp: Slot [22] registered Jan 14 01:10:32.279272 kernel: acpiphp: Slot [23] registered Jan 14 01:10:32.279282 kernel: acpiphp: Slot [24] registered Jan 14 01:10:32.279293 kernel: acpiphp: Slot [25] registered Jan 14 01:10:32.279303 kernel: acpiphp: Slot [26] registered Jan 14 01:10:32.279313 kernel: acpiphp: Slot [27] registered Jan 14 01:10:32.279324 kernel: acpiphp: Slot [28] registered Jan 14 01:10:32.279334 kernel: acpiphp: Slot [29] registered Jan 14 01:10:32.279344 kernel: acpiphp: Slot [30] registered Jan 14 01:10:32.279360 kernel: acpiphp: Slot [31] registered Jan 14 01:10:32.279370 kernel: PCI host bridge to bus 0000:00 Jan 14 01:10:32.279522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 01:10:32.279650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 01:10:32.279884 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 01:10:32.280089 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Jan 14 01:10:32.280241 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 14 01:10:32.280367 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 01:10:32.280530 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jan 14 01:10:32.280686 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jan 14 01:10:32.280924 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jan 14 01:10:32.281205 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Jan 14 01:10:32.281438 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jan 14 01:10:32.281638 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jan 14 01:10:32.283924 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jan 14 01:10:32.284200 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jan 14 01:10:32.284461 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 14 01:10:32.284750 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Jan 14 01:10:32.284989 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jan 14 01:10:32.285216 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 14 01:10:32.285416 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 14 01:10:32.285677 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 14 01:10:32.297112 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jan 14 01:10:32.297383 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Jan 14 01:10:32.297628 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Jan 14 01:10:32.297777 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Jan 14 01:10:32.297950 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 14 01:10:32.298130 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 01:10:32.298269 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Jan 14 01:10:32.298464 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Jan 14 01:10:32.298675 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Jan 14 01:10:32.298975 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 01:10:32.299307 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Jan 14 01:10:32.299524 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Jan 14 01:10:32.299797 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 14 01:10:32.300087 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 14 01:10:32.300326 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Jan 14 01:10:32.300557 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Jan 14 01:10:32.301878 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 14 01:10:32.302124 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 14 01:10:32.302270 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Jan 14 01:10:32.302412 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Jan 14 01:10:32.302550 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Jan 14 01:10:32.302696 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 14 01:10:32.303925 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Jan 14 01:10:32.304190 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Jan 14 01:10:32.304411 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Jan 14 01:10:32.304583 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jan 14 01:10:32.304725 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Jan 14 01:10:32.306003 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 14 01:10:32.306028 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 01:10:32.306039 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 01:10:32.306051 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 14 01:10:32.306065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 01:10:32.306093 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 14 01:10:32.306109 kernel: iommu: Default domain type: Translated Jan 14 01:10:32.306144 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:10:32.306182 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:10:32.306193 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 01:10:32.306204 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 14 01:10:32.306214 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 14 01:10:32.306395 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 14 01:10:32.306537 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 14 01:10:32.306695 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 14 01:10:32.306717 kernel: vgaarb: loaded Jan 14 01:10:32.306729 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 14 01:10:32.306740 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 14 01:10:32.306750 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 01:10:32.306761 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:10:32.306773 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:10:32.306783 kernel: pnp: PnP ACPI init Jan 14 01:10:32.306803 kernel: pnp: PnP ACPI: found 4 devices Jan 14 01:10:32.306814 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 01:10:32.306824 kernel: NET: Registered PF_INET protocol family Jan 14 01:10:32.306846 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 01:10:32.307890 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 14 01:10:32.307902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:10:32.307913 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:10:32.307934 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 14 01:10:32.307945 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 14 01:10:32.307956 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 01:10:32.307967 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 01:10:32.307977 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:10:32.307988 kernel: NET: Registered PF_XDP protocol family Jan 14 01:10:32.308167 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 01:10:32.308307 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 01:10:32.308432 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 01:10:32.308556 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 14 01:10:32.308679 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 14 01:10:32.308827 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 14 01:10:32.311108 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 14 01:10:32.311163 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 14 01:10:32.311421 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 41467 usecs Jan 14 01:10:32.311451 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:10:32.311475 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 01:10:32.311498 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 14 01:10:32.311520 kernel: Initialise system trusted keyrings Jan 14 01:10:32.311543 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 01:10:32.311582 kernel: Key type asymmetric registered Jan 14 01:10:32.311605 kernel: Asymmetric key parser 'x509' registered Jan 14 01:10:32.311627 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:10:32.311655 kernel: io scheduler mq-deadline registered Jan 14 01:10:32.311681 kernel: io scheduler kyber registered Jan 14 01:10:32.311708 kernel: io scheduler bfq registered Jan 14 01:10:32.311749 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:10:32.311783 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 14 01:10:32.311794 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 14 01:10:32.311805 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 14 01:10:32.311816 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:10:32.311827 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:10:32.311860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 01:10:32.311871 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 01:10:32.311911 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 01:10:32.312096 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 14 01:10:32.312112 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 01:10:32.312302 kernel: rtc_cmos 00:03: registered as rtc0 Jan 14 01:10:32.312479 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T01:10:30 UTC (1768353030) Jan 14 01:10:32.312612 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 14 01:10:32.312641 kernel: intel_pstate: CPU model not supported Jan 14 01:10:32.312652 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:10:32.312662 kernel: Segment Routing with IPv6 Jan 14 01:10:32.312673 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 01:10:32.312685 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:10:32.312696 kernel: Key type dns_resolver registered Jan 14 01:10:32.312707 kernel: IPI shorthand broadcast: enabled Jan 14 01:10:32.312723 kernel: sched_clock: Marking stable (2295004218, 254981502)->(2613044781, -63059061) Jan 14 01:10:32.312734 kernel: registered taskstats version 1 Jan 14 01:10:32.312745 kernel: Loading compiled-in X.509 certificates Jan 14 01:10:32.312755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 
14 01:10:32.312766 kernel: Demotion targets for Node 0: null Jan 14 01:10:32.312776 kernel: Key type .fscrypt registered Jan 14 01:10:32.312786 kernel: Key type fscrypt-provisioning registered Jan 14 01:10:32.312832 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:10:32.313740 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:10:32.313761 kernel: ima: No architecture policies found Jan 14 01:10:32.313772 kernel: clk: Disabling unused clocks Jan 14 01:10:32.313784 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:10:32.313795 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:10:32.313806 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:10:32.313817 kernel: Run /init as init process Jan 14 01:10:32.313856 kernel: with arguments: Jan 14 01:10:32.313868 kernel: /init Jan 14 01:10:32.313879 kernel: with environment: Jan 14 01:10:32.313890 kernel: HOME=/ Jan 14 01:10:32.313901 kernel: TERM=linux Jan 14 01:10:32.313912 kernel: SCSI subsystem initialized Jan 14 01:10:32.313923 kernel: libata version 3.00 loaded. Jan 14 01:10:32.314143 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 14 01:10:32.314315 kernel: scsi host0: ata_piix Jan 14 01:10:32.314465 kernel: scsi host1: ata_piix Jan 14 01:10:32.314481 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Jan 14 01:10:32.314492 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Jan 14 01:10:32.314514 kernel: ACPI: bus type USB registered Jan 14 01:10:32.314526 kernel: usbcore: registered new interface driver usbfs Jan 14 01:10:32.314537 kernel: usbcore: registered new interface driver hub Jan 14 01:10:32.314548 kernel: usbcore: registered new device driver usb Jan 14 01:10:32.314703 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 14 01:10:32.314939 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 14 01:10:32.315154 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 14 01:10:32.315403 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 14 01:10:32.315678 kernel: hub 1-0:1.0: USB hub found Jan 14 01:10:32.315996 kernel: hub 1-0:1.0: 2 ports detected Jan 14 01:10:32.316251 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 14 01:10:32.316489 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 14 01:10:32.316520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:10:32.316544 kernel: GPT:16515071 != 125829119 Jan 14 01:10:32.316568 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 01:10:32.316592 kernel: GPT:16515071 != 125829119 Jan 14 01:10:32.316615 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:10:32.316650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 01:10:32.316871 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 14 01:10:32.317073 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 14 01:10:32.317251 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Jan 14 01:10:32.317405 kernel: scsi host2: Virtio SCSI HBA Jan 14 01:10:32.317447 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 01:10:32.317466 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:10:32.317485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:10:32.317507 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:10:32.317520 kernel: raid6: avx2x4 gen() 16056 MB/s Jan 14 01:10:32.317532 kernel: raid6: avx2x2 gen() 16656 MB/s Jan 14 01:10:32.317543 kernel: raid6: avx2x1 gen() 12767 MB/s Jan 14 01:10:32.317565 kernel: raid6: using algorithm avx2x2 gen() 16656 MB/s Jan 14 01:10:32.317576 kernel: raid6: .... xor() 16740 MB/s, rmw enabled Jan 14 01:10:32.317587 kernel: raid6: using avx2x2 recovery algorithm Jan 14 01:10:32.317599 kernel: xor: automatically using best checksumming function avx Jan 14 01:10:32.317616 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:10:32.317628 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (161) Jan 14 01:10:32.317640 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:10:32.317656 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:10:32.317668 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:10:32.317679 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:10:32.317690 kernel: loop: module loaded Jan 14 01:10:32.317701 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:10:32.317712 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:10:32.317724 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:10:32.317750 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:10:32.317763 systemd[1]: Detected virtualization kvm. Jan 14 01:10:32.317774 systemd[1]: Detected architecture x86-64. Jan 14 01:10:32.317785 systemd[1]: Running in initrd. Jan 14 01:10:32.317796 systemd[1]: No hostname configured, using default hostname. Jan 14 01:10:32.317808 systemd[1]: Hostname set to . Jan 14 01:10:32.317825 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:10:32.317860 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:10:32.317872 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:10:32.317883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:10:32.317895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:10:32.317908 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:10:32.317927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:10:32.317939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:10:32.317951 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:10:32.317962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 01:10:32.317974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:10:32.317986 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:10:32.318003 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:10:32.318015 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:10:32.318026 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:10:32.318038 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:10:32.318049 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:10:32.318061 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:10:32.318072 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:10:32.318089 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:10:32.318101 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:10:32.318113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:10:32.318124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:10:32.318136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:10:32.318147 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:10:32.318159 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:10:32.318176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:10:32.318188 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:10:32.318199 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:10:32.318211 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:10:32.318222 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:10:32.318234 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:10:32.318251 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:10:32.318263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:32.318275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:10:32.318286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:10:32.318304 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:10:32.318316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:10:32.318373 systemd-journald[299]: Collecting audit messages is enabled. Jan 14 01:10:32.318406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:10:32.318418 kernel: audit: type=1130 audit(1768353032.281:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.318430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:10:32.318442 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 14 01:10:32.318453 kernel: Bridge firewalling registered Jan 14 01:10:32.318465 systemd-journald[299]: Journal started Jan 14 01:10:32.318493 systemd-journald[299]: Runtime Journal (/run/log/journal/b1f6ce1f38d24c3298cad4c1b4a50d07) is 4.8M, max 39.1M, 34.2M free. Jan 14 01:10:32.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.306931 systemd-modules-load[300]: Inserted module 'br_netfilter' Jan 14 01:10:32.393095 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:10:32.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.399912 kernel: audit: type=1130 audit(1768353032.392:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.400381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:10:32.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.409873 kernel: audit: type=1130 audit(1768353032.400:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.409987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:32.417738 kernel: audit: type=1130 audit(1768353032.409:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.412548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:10:32.427120 kernel: audit: type=1130 audit(1768353032.417:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.423642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:10:32.431140 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:10:32.437081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:10:32.464102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 01:10:32.473419 kernel: audit: type=1130 audit(1768353032.464:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.465853 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:10:32.468094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:10:32.465000 audit: BPF prog-id=6 op=LOAD Jan 14 01:10:32.483880 kernel: audit: type=1334 audit(1768353032.465:8): prog-id=6 op=LOAD Jan 14 01:10:32.484517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:10:32.492558 kernel: audit: type=1130 audit(1768353032.484:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.485903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:10:32.502621 kernel: audit: type=1130 audit(1768353032.492:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.501726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:10:32.535115 dracut-cmdline[338]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:10:32.562541 systemd-resolved[332]: Positive Trust Anchors: Jan 14 01:10:32.562564 systemd-resolved[332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:10:32.562571 systemd-resolved[332]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:10:32.562668 systemd-resolved[332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:10:32.595069 systemd-resolved[332]: Defaulting to hostname 'linux'. Jan 14 01:10:32.596450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:10:32.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.597262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:10:32.693958 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:10:32.717868 kernel: iscsi: registered transport (tcp) Jan 14 01:10:32.747217 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:10:32.747297 kernel: QLogic iSCSI HBA Driver Jan 14 01:10:32.786504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:10:32.810930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:10:32.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.812708 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:10:32.887623 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:10:32.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.891999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:10:32.895022 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:10:32.943507 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:10:32.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:32.944000 audit: BPF prog-id=7 op=LOAD Jan 14 01:10:32.944000 audit: BPF prog-id=8 op=LOAD Jan 14 01:10:32.946695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:10:32.985387 systemd-udevd[575]: Using default interface naming scheme 'v257'. Jan 14 01:10:33.004876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:10:33.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:33.010666 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:10:33.048853 dracut-pre-trigger[650]: rd.md=0: removing MD RAID activation Jan 14 01:10:33.054445 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:10:33.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.059000 audit: BPF prog-id=9 op=LOAD Jan 14 01:10:33.061564 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:10:33.105248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:10:33.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.109027 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:10:33.138077 systemd-networkd[688]: lo: Link UP Jan 14 01:10:33.139015 systemd-networkd[688]: lo: Gained carrier Jan 14 01:10:33.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.139883 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:10:33.142935 systemd[1]: Reached target network.target - Network. Jan 14 01:10:33.211039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:10:33.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.214387 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:10:33.342089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 01:10:33.381309 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 01:10:33.412159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:10:33.429317 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 01:10:33.434266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:10:33.483247 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:10:33.493326 disk-uuid[742]: Primary Header is updated. Jan 14 01:10:33.493326 disk-uuid[742]: Secondary Entries is updated. Jan 14 01:10:33.493326 disk-uuid[742]: Secondary Header is updated. Jan 14 01:10:33.507219 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:10:33.532154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:33.533397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:33.544181 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:10:33.544216 kernel: audit: type=1131 audit(1768353033.534:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:33.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.543620 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:33.548076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:33.567873 kernel: AES CTR mode by8 optimization enabled Jan 14 01:10:33.583019 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 14 01:10:33.583035 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 14 01:10:33.594467 systemd-networkd[688]: eth0: Link UP Jan 14 01:10:33.596348 systemd-networkd[688]: eth0: Gained carrier Jan 14 01:10:33.596370 systemd-networkd[688]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Jan 14 01:10:33.613206 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:10:33.613214 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:10:33.617167 systemd-networkd[688]: eth1: Link UP Jan 14 01:10:33.617418 systemd-networkd[688]: eth1: Gained carrier Jan 14 01:10:33.617441 systemd-networkd[688]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:10:33.624930 systemd-networkd[688]: eth0: DHCPv4 address 143.198.154.109/20, gateway 143.198.144.1 acquired from 169.254.169.253 Jan 14 01:10:33.639952 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.76/20 acquired from 169.254.169.253 Jan 14 01:10:33.704053 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:10:33.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.773871 kernel: audit: type=1130 audit(1768353033.763:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.773991 kernel: audit: type=1130 audit(1768353033.772:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.772190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:33.781080 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:10:33.782027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:10:33.783591 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:10:33.786995 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jan 14 01:10:33.817608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:10:33.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:33.826924 kernel: audit: type=1130 audit(1768353033.818:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.562332 disk-uuid[743]: Warning: The kernel is still using the old partition table. Jan 14 01:10:34.562332 disk-uuid[743]: The new table will be used at the next reboot or after you Jan 14 01:10:34.562332 disk-uuid[743]: run partprobe(8) or kpartx(8) Jan 14 01:10:34.562332 disk-uuid[743]: The operation has completed successfully. Jan 14 01:10:34.574794 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:10:34.576076 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 01:10:34.591811 kernel: audit: type=1130 audit(1768353034.576:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.591885 kernel: audit: type=1131 audit(1768353034.576:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.580079 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 01:10:34.645873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832) Jan 14 01:10:34.653271 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:10:34.653412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:10:34.660900 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:10:34.660994 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:10:34.671928 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:10:34.673390 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:10:34.681664 kernel: audit: type=1130 audit(1768353034.673:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.677218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
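The disk-uuid warning above notes that the kernel keeps using its cached partition table after the GPT headers were rewritten, and names partprobe(8) or kpartx(8) as the way to refresh it without rebooting. A minimal sketch of that step (the device path is an assumption taken from the vda device seen elsewhere in this log):

    # Sketch: ask the running kernel to re-read a device's partition table,
    # which is what the disk-uuid warning above suggests doing.
    import subprocess

    def reread_partitions(device: str = "/dev/vda") -> None:
        """Reload the kernel's partition table via partprobe(8)."""
        # kpartx(8) or a reboot would achieve the same effect.
        subprocess.run(["partprobe", device], check=True)

    if __name__ == "__main__":
        reread_partitions()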
Jan 14 01:10:34.882997 systemd-networkd[688]: eth0: Gained IPv6LL Jan 14 01:10:34.917007 ignition[851]: Ignition 2.24.0 Jan 14 01:10:34.917025 ignition[851]: Stage: fetch-offline Jan 14 01:10:34.917092 ignition[851]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:34.936422 kernel: audit: type=1130 audit(1768353034.926:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:34.922420 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:10:34.917106 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:34.929964 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 01:10:34.917270 ignition[851]: parsed url from cmdline: "" Jan 14 01:10:34.917275 ignition[851]: no config URL provided Jan 14 01:10:34.917282 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:10:34.917294 ignition[851]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:10:34.917301 ignition[851]: failed to fetch config: resource requires networking Jan 14 01:10:34.917602 ignition[851]: Ignition finished successfully Jan 14 01:10:34.964647 ignition[857]: Ignition 2.24.0 Jan 14 01:10:34.964666 ignition[857]: Stage: fetch Jan 14 01:10:34.965049 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:34.965062 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:34.965179 ignition[857]: parsed url from cmdline: "" Jan 14 01:10:34.965185 ignition[857]: no config URL provided Jan 14 01:10:34.965194 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:10:34.965214 ignition[857]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:10:34.965271 ignition[857]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 14 01:10:34.985163 ignition[857]: GET result: OK Jan 14 01:10:34.985561 ignition[857]: parsing config with SHA512: 9a4af1164be782d434baa8c6628e682f3bbe7a838ec077be9c1453a41ce70f8decb21c31c234a366a1bbbb5f5d9cd925cc7eaebbdc411919a49b8d7967a32f5c Jan 14 01:10:34.999802 unknown[857]: fetched base config from "system" Jan 14 01:10:34.999822 unknown[857]: fetched base config from "system" Jan 14 01:10:35.000863 ignition[857]: fetch: fetch complete Jan 14 01:10:34.999831 unknown[857]: fetched user config from "digitalocean" Jan 14 01:10:35.000873 ignition[857]: fetch: fetch passed Jan 14 01:10:35.015095 kernel: audit: type=1130 audit(1768353035.006:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.006314 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 01:10:35.000962 ignition[857]: Ignition finished successfully Jan 14 01:10:35.010089 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
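The fetch stage above retrieves user-data from the DigitalOcean metadata service at http://169.254.169.254/metadata/v1/user-data and logs a SHA512 of the config it parsed. Conceptually, that step looks roughly like the following sketch; this is an illustration of the flow, not Ignition's actual implementation:

    # Sketch: pull instance user-data over the link-local metadata endpoint
    # and print a SHA512 digest, mirroring the ignition[857] lines above.
    import hashlib
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1/user-data"

    def fetch_user_data(url: str = METADATA_URL) -> bytes:
        """Fetch the raw user-data payload for this instance."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    if __name__ == "__main__":
        payload = fetch_user_data()
        print("parsing config with SHA512:", hashlib.sha512(payload).hexdigest())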
Jan 14 01:10:35.046678 ignition[864]: Ignition 2.24.0 Jan 14 01:10:35.046696 ignition[864]: Stage: kargs Jan 14 01:10:35.047022 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:35.047044 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:35.060167 kernel: audit: type=1130 audit(1768353035.051:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.050757 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:10:35.048631 ignition[864]: kargs: kargs passed Jan 14 01:10:35.054530 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:10:35.048701 ignition[864]: Ignition finished successfully Jan 14 01:10:35.102473 ignition[871]: Ignition 2.24.0 Jan 14 01:10:35.102492 ignition[871]: Stage: disks Jan 14 01:10:35.102864 ignition[871]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:35.102883 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:35.107647 ignition[871]: disks: disks passed Jan 14 01:10:35.107890 ignition[871]: Ignition finished successfully Jan 14 01:10:35.109830 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:10:35.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.112262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:10:35.114217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:10:35.115250 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:10:35.117244 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:10:35.118883 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:10:35.122296 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 01:10:35.174994 systemd-fsck[880]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 14 01:10:35.181243 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 01:10:35.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.184980 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 01:10:35.336873 kernel: EXT4-fs (vda9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none. Jan 14 01:10:35.338396 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 01:10:35.340145 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 01:10:35.343226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:10:35.345956 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 01:10:35.349694 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... 
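After the disks stage passes, systemd-fsck reports the ROOT filesystem clean and sysroot.mount mounts it (the ext4 filesystem on vda9) at /sysroot. A rough equivalent of that mount step, resolving the device by label the way the by-label device units above do, might look like this sketch (mount options are left to the defaults since the real unit's options are not shown in the log):

    # Sketch: mount the filesystem labelled ROOT on /sysroot, roughly what
    # sysroot.mount does above. LABEL= resolution is handled by mount(8).
    import subprocess

    def mount_by_label(label: str = "ROOT", target: str = "/sysroot") -> None:
        subprocess.run(["mount", f"LABEL={label}", target], check=True)

    if __name__ == "__main__":
        mount_by_label()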
Jan 14 01:10:35.355738 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 01:10:35.356691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 01:10:35.356755 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:10:35.374373 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 01:10:35.388754 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 01:10:35.406587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Jan 14 01:10:35.406635 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:10:35.406694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:10:35.419885 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:10:35.419960 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:10:35.431503 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:10:35.514818 coreos-metadata[891]: Jan 14 01:10:35.514 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 14 01:10:35.520100 coreos-metadata[890]: Jan 14 01:10:35.519 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 14 01:10:35.523984 systemd-networkd[688]: eth1: Gained IPv6LL Jan 14 01:10:35.529353 coreos-metadata[891]: Jan 14 01:10:35.528 INFO Fetch successful Jan 14 01:10:35.536873 coreos-metadata[890]: Jan 14 01:10:35.536 INFO Fetch successful Jan 14 01:10:35.539828 coreos-metadata[891]: Jan 14 01:10:35.539 INFO wrote hostname ci-4578.0.0-p-c80f5dee3b to /sysroot/etc/hostname Jan 14 01:10:35.544180 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:10:35.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.552528 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 14 01:10:35.552697 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 14 01:10:35.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-afterburn-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.720780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:10:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.723915 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 01:10:35.727002 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:10:35.752225 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
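The flatcar-metadata-hostname step above fetches http://169.254.169.254/metadata/v1.json and writes the hostname ci-4578.0.0-p-c80f5dee3b into /sysroot/etc/hostname. A sketch of that flow follows; the "hostname" field name is an assumption based on these log lines rather than a verified schema, and the real agent does more than this:

    # Sketch: fetch the droplet metadata document and persist its hostname,
    # mirroring the coreos-metadata[891] lines above.
    import json
    import urllib.request
    from pathlib import Path

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]  # assumed key name
        Path(sysroot, "etc/hostname").write_text(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())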
Jan 14 01:10:35.755091 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:10:35.780670 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:10:35.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.794457 ignition[994]: INFO : Ignition 2.24.0 Jan 14 01:10:35.794457 ignition[994]: INFO : Stage: mount Jan 14 01:10:35.796984 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:35.796984 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:35.796984 ignition[994]: INFO : mount: mount passed Jan 14 01:10:35.796984 ignition[994]: INFO : Ignition finished successfully Jan 14 01:10:35.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.797888 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:10:35.799869 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:10:35.826102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:10:35.863112 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1005) Jan 14 01:10:35.867547 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:10:35.867625 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:10:35.884754 kernel: BTRFS info (device vda6): turning on async discard Jan 14 01:10:35.884907 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 01:10:35.888525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
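For the mount stage above, the OEM btrfs filesystem on /dev/vda6 is mounted under /sysroot/oem, and the kernel lines mention async discard and the free space tree being enabled. The sketch below shows an equivalent mount invocation; the explicit options are illustrative guesses consistent with those kernel messages, not values read from the real sysroot-oem.mount unit:

    # Sketch: mount the OEM btrfs partition, roughly what sysroot-oem.mount
    # does above. discard=async and space_cache=v2 match the kernel's
    # "async discard" / "free space tree" messages but are assumed here.
    import subprocess

    def mount_oem(device: str = "/dev/disk/by-label/OEM",
                  target: str = "/sysroot/oem") -> None:
        subprocess.run(
            ["mount", "-t", "btrfs", "-o", "discard=async,space_cache=v2",
             device, target],
            check=True,
        )

    if __name__ == "__main__":
        mount_oem()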
Jan 14 01:10:35.939025 ignition[1021]: INFO : Ignition 2.24.0 Jan 14 01:10:35.939025 ignition[1021]: INFO : Stage: files Jan 14 01:10:35.940777 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:35.940777 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:35.942881 ignition[1021]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:10:35.942881 ignition[1021]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:10:35.942881 ignition[1021]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:10:35.948477 ignition[1021]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:10:35.949734 ignition[1021]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:10:35.950939 ignition[1021]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:10:35.949980 unknown[1021]: wrote ssh authorized keys file for user: core Jan 14 01:10:35.953245 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:10:35.953245 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 01:10:35.989941 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:10:36.066246 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:10:36.067961 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:10:36.079670 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 14 01:10:36.496600 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:10:37.541579 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:10:37.541579 ignition[1021]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:10:37.544802 ignition[1021]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:10:37.548142 ignition[1021]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:10:37.548142 ignition[1021]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:10:37.548142 ignition[1021]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:10:37.551909 ignition[1021]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:10:37.551909 ignition[1021]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:10:37.551909 ignition[1021]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:10:37.551909 ignition[1021]: INFO : files: files passed Jan 14 01:10:37.551909 ignition[1021]: INFO : Ignition finished successfully Jan 14 01:10:37.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.551759 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:10:37.555016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:10:37.560346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:10:37.585135 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:10:37.585294 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 01:10:37.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:37.600267 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:37.600267 initrd-setup-root-after-ignition[1054]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:37.603859 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:37.604146 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:10:37.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.607273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:10:37.609937 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:10:37.679179 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:10:37.679360 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:10:37.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.681463 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:10:37.683046 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:10:37.685096 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:10:37.686548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:10:37.738770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:10:37.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.742237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:10:37.777803 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:10:37.778135 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:10:37.780266 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:10:37.782319 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:10:37.784185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:10:37.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.784438 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:10:37.786535 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:10:37.787818 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:10:37.789518 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
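The Ignition files stage recorded a few entries back wrote the helm tarball into /opt, several YAML files into /home/core, the kubernetes sysext image and its /etc/extensions/kubernetes.raw symlink, and enabled prepare-helm.service. The fragment below, expressed as a Python dict, is a guess at the shape of the kind of Ignition configuration that would produce a subset of those operations; it is not the droplet's actual user-data, and the spec version and field layout should be checked against the Ignition documentation:

    # Sketch: a plausible (assumed) Ignition config fragment covering a few of
    # the file/link/unit operations logged by ignition[1021] above.
    import json

    config_fragment = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                    },
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}]
        },
    }

    if __name__ == "__main__":
        print(json.dumps(config_fragment, indent=2))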
Jan 14 01:10:37.791087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:10:37.792745 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:10:37.794600 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:10:37.796575 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:10:37.798482 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:10:37.800364 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:10:37.802208 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:10:37.804018 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:10:37.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.805602 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:10:37.805899 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:10:37.807510 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:10:37.815700 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:10:37.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.817097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:10:37.817517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:10:37.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.818878 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:10:37.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.819159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:10:37.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.821327 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 01:10:37.821659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:10:37.823440 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:10:37.823567 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:10:37.825354 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 01:10:37.825689 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:10:37.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.829072 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 14 01:10:37.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.836031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:10:37.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.836793 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:10:37.836982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:10:37.840785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:10:37.841013 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:10:37.844139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:10:37.844418 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:10:37.858864 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:10:37.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.859223 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:10:37.884580 ignition[1078]: INFO : Ignition 2.24.0 Jan 14 01:10:37.886964 ignition[1078]: INFO : Stage: umount Jan 14 01:10:37.886964 ignition[1078]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:37.886964 ignition[1078]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 14 01:10:37.889062 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:10:37.892213 ignition[1078]: INFO : umount: umount passed Jan 14 01:10:37.894077 ignition[1078]: INFO : Ignition finished successfully Jan 14 01:10:37.893991 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:10:37.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.894162 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:10:37.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.897038 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:10:37.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.897191 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:10:37.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:37.900039 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:10:37.900124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:10:37.901831 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:10:37.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.901905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:10:37.903746 systemd[1]: Stopped target network.target - Network. Jan 14 01:10:37.904559 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:10:37.904672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:10:37.908071 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:10:37.909245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:10:37.913952 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:10:37.915105 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:10:37.917073 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:10:37.918520 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:10:37.918600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:10:37.920033 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:10:37.920102 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:10:37.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.921517 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:10:37.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.921566 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:10:37.922957 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:10:37.923057 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:10:37.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.924508 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:10:37.924588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:10:37.926253 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:10:37.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.927830 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:10:37.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.930033 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 14 01:10:37.930224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:10:37.934246 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:10:37.934386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:10:37.937170 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:10:37.937323 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:10:37.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.943486 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:10:37.943687 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:10:37.948000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:10:37.949547 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:10:37.949000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:10:37.951348 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:10:37.951411 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:10:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.954823 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:10:37.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.955795 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:10:37.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:37.955913 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:10:37.956770 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:10:37.956877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:10:37.957657 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:10:37.957724 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:10:37.961417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:10:37.997904 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:10:37.999287 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:10:38.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.001811 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:10:38.003091 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:10:38.005296 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:10:38.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:38.005429 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:10:38.007130 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:10:38.007195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:10:38.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.008936 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:10:38.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.009029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:10:38.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.011304 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:10:38.011408 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:10:38.012950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:10:38.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.013038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:10:38.021155 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:10:38.025213 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:10:38.025329 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:10:38.026143 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:10:38.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.026212 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:10:38.030949 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 01:10:38.031065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:10:38.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.034947 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:10:38.035027 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 14 01:10:38.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.037959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:38.038048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:38.047399 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:10:38.047646 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:10:38.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:38.049642 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:10:38.052781 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:10:38.082119 systemd[1]: Switching root. Jan 14 01:10:38.136416 systemd-journald[299]: Journal stopped Jan 14 01:10:40.092392 systemd-journald[299]: Received SIGTERM from PID 1 (systemd). Jan 14 01:10:40.092503 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:10:40.092539 kernel: SELinux: policy capability open_perms=1 Jan 14 01:10:40.092570 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:10:40.092597 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:10:40.092622 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:10:40.092680 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:10:40.092707 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:10:40.092734 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:10:40.092761 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:10:40.092789 systemd[1]: Successfully loaded SELinux policy in 87.175ms. Jan 14 01:10:40.092851 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.089ms. Jan 14 01:10:40.092894 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:10:40.092925 systemd[1]: Detected virtualization kvm. Jan 14 01:10:40.092940 systemd[1]: Detected architecture x86-64. Jan 14 01:10:40.092963 systemd[1]: Detected first boot. Jan 14 01:10:40.092978 systemd[1]: Hostname set to . Jan 14 01:10:40.092996 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
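After the switch into the real root, systemd loads the SELinux policy, detects KVM and a first boot, and initializes the machine ID from the SMBIOS/DMI UUID. As a rough illustration of where that firmware value is exposed on a guest like this one (systemd's own derivation logic is more involved and requires root to read the file):

    # Sketch: read the firmware-provided product UUID that systemd can use to
    # seed /etc/machine-id on first boot, per the "Initializing machine ID
    # from SMBIOS/DMI UUID" line above.
    from pathlib import Path

    def read_dmi_uuid() -> str:
        """Return the SMBIOS product UUID exposed through sysfs."""
        return Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    if __name__ == "__main__":
        # A machine ID is 32 lowercase hex characters, i.e. the UUID
        # with its dashes removed.
        print(read_dmi_uuid().replace("-", "").lower())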
Jan 14 01:10:40.093011 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 14 01:10:40.093029 kernel: audit: type=1334 audit(1768353038.543:89): prog-id=10 op=LOAD Jan 14 01:10:40.093056 kernel: audit: type=1334 audit(1768353038.545:90): prog-id=10 op=UNLOAD Jan 14 01:10:40.093069 kernel: audit: type=1334 audit(1768353038.545:91): prog-id=11 op=LOAD Jan 14 01:10:40.093082 kernel: audit: type=1334 audit(1768353038.545:92): prog-id=11 op=UNLOAD Jan 14 01:10:40.093098 zram_generator::config[1121]: No configuration found. Jan 14 01:10:40.093114 kernel: Guest personality initialized and is inactive Jan 14 01:10:40.093128 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:10:40.093147 kernel: Initialized host personality Jan 14 01:10:40.093162 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:10:40.093177 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:10:40.093192 kernel: audit: type=1334 audit(1768353039.524:93): prog-id=12 op=LOAD Jan 14 01:10:40.093204 kernel: audit: type=1334 audit(1768353039.524:94): prog-id=3 op=UNLOAD Jan 14 01:10:40.093218 kernel: audit: type=1334 audit(1768353039.524:95): prog-id=13 op=LOAD Jan 14 01:10:40.093231 kernel: audit: type=1334 audit(1768353039.524:96): prog-id=14 op=LOAD Jan 14 01:10:40.093251 kernel: audit: type=1334 audit(1768353039.524:97): prog-id=4 op=UNLOAD Jan 14 01:10:40.093264 kernel: audit: type=1334 audit(1768353039.524:98): prog-id=5 op=UNLOAD Jan 14 01:10:40.093277 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:10:40.093291 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:10:40.093305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:10:40.093326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:10:40.093342 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:10:40.093362 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:10:40.093377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:10:40.093391 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:10:40.093407 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:10:40.093422 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:10:40.093443 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:10:40.093458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:10:40.093473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:10:40.093487 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:10:40.093502 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:10:40.093516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:10:40.093530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:10:40.093551 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:10:40.093565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
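The zram_generator line above reports "No configuration found", so no zram swap device is set up on this host. Purely for illustration, opting in would mean providing a zram-generator configuration file; the keys below follow the generator's documented format as best recalled and should be checked against the shipped man page before use:

    # Sketch (assumed config format): write an opt-in zram-generator config.
    from pathlib import Path

    ZRAM_CONF = """\
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
    """

    def enable_zram(path: str = "/etc/systemd/zram-generator.conf") -> None:
        Path(path).write_text(ZRAM_CONF)
        # A daemon-reload plus starting systemd-zram-setup@zram0.service
        # (or a reboot) would then create and enable the device.

    if __name__ == "__main__":
        enable_zram()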
Jan 14 01:10:40.093580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:10:40.093594 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:10:40.093611 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:10:40.093626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:10:40.093641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:10:40.093662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:10:40.093677 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:10:40.093691 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:10:40.093706 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:10:40.093742 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:10:40.093778 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:10:40.093807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:10:40.096940 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:10:40.097007 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:10:40.097038 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:10:40.097068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:10:40.097118 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:10:40.097149 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:10:40.097179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:10:40.097208 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:10:40.097269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:10:40.097307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:10:40.097332 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:10:40.097354 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:10:40.097374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:40.097393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:10:40.097426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:10:40.097457 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:10:40.097500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:10:40.097530 systemd[1]: Reached target machines.target - Containers. Jan 14 01:10:40.097560 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:10:40.097590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:40.097620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:10:40.097659 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 14 01:10:40.097689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:40.097720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:10:40.097751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:40.097781 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:10:40.097811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:40.097890 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:10:40.097938 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:10:40.097962 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:10:40.098000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:10:40.098051 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:10:40.098089 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:40.098125 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:10:40.098148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:10:40.098169 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:10:40.098203 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:10:40.098226 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:10:40.098242 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:10:40.098266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:40.098289 kernel: fuse: init (API version 7.41) Jan 14 01:10:40.098305 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:10:40.098320 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:10:40.098336 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:10:40.098350 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:10:40.098365 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:10:40.098380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:10:40.098402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:10:40.098416 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:10:40.098431 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:10:40.098447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:40.098462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:40.098483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:40.098498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:40.098513 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 14 01:10:40.098535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:40.098552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:10:40.098568 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:10:40.098584 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:10:40.098611 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:10:40.098626 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:10:40.098641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:10:40.098683 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:10:40.098699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:10:40.098715 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:10:40.098730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:10:40.098745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:40.098760 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:40.098781 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:10:40.098796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:10:40.106957 systemd-journald[1195]: Collecting audit messages is enabled. Jan 14 01:10:40.107053 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:10:40.107089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:10:40.107121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:10:40.107186 systemd-journald[1195]: Journal started Jan 14 01:10:40.107321 systemd-journald[1195]: Runtime Journal (/run/log/journal/b1f6ce1f38d24c3298cad4c1b4a50d07) is 4.8M, max 39.1M, 34.2M free. Jan 14 01:10:39.644000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:10:39.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:39.863000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:10:39.863000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:10:39.867000 audit: BPF prog-id=15 op=LOAD Jan 14 01:10:39.867000 audit: BPF prog-id=16 op=LOAD Jan 14 01:10:39.867000 audit: BPF prog-id=17 op=LOAD Jan 14 01:10:39.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:39.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:40.083000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:10:40.115308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:10:40.083000 audit[1195]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd076e0540 a2=4000 a3=0 items=0 ppid=1 pid=1195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:40.083000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:10:39.503524 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:10:39.525959 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 01:10:39.529734 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:10:40.133953 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:10:40.134050 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:10:40.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.142227 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:10:40.143990 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:10:40.146423 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:10:40.147706 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:10:40.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.149052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:10:40.176785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:10:40.179417 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:10:40.191884 kernel: ACPI: bus type drm_connector registered Jan 14 01:10:40.190424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:10:40.196203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:10:40.197612 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:10:40.198956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 14 01:10:40.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.220884 kernel: loop1: detected capacity change from 0 to 8 Jan 14 01:10:40.220822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:10:40.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.244981 systemd-journald[1195]: Time spent on flushing to /var/log/journal/b1f6ce1f38d24c3298cad4c1b4a50d07 is 123.652ms for 1150 entries. Jan 14 01:10:40.244981 systemd-journald[1195]: System Journal (/var/log/journal/b1f6ce1f38d24c3298cad4c1b4a50d07) is 8M, max 163.5M, 155.5M free. Jan 14 01:10:40.384678 systemd-journald[1195]: Received client request to flush runtime journal. Jan 14 01:10:40.384766 kernel: loop2: detected capacity change from 0 to 219144 Jan 14 01:10:40.384801 kernel: loop3: detected capacity change from 0 to 50784 Jan 14 01:10:40.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.246992 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:10:40.294241 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 14 01:10:40.294262 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 14 01:10:40.307990 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:10:40.312069 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:10:40.316229 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:10:40.389109 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:10:40.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.398909 kernel: loop4: detected capacity change from 0 to 111560 Jan 14 01:10:40.419408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:10:40.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:40.448150 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:10:40.448900 kernel: loop5: detected capacity change from 0 to 8 Jan 14 01:10:40.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.453892 kernel: loop6: detected capacity change from 0 to 219144 Jan 14 01:10:40.453000 audit: BPF prog-id=18 op=LOAD Jan 14 01:10:40.454000 audit: BPF prog-id=19 op=LOAD Jan 14 01:10:40.454000 audit: BPF prog-id=20 op=LOAD Jan 14 01:10:40.457138 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:10:40.460000 audit: BPF prog-id=21 op=LOAD Jan 14 01:10:40.465204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:10:40.471811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:10:40.480901 kernel: loop7: detected capacity change from 0 to 50784 Jan 14 01:10:40.483000 audit: BPF prog-id=22 op=LOAD Jan 14 01:10:40.483000 audit: BPF prog-id=23 op=LOAD Jan 14 01:10:40.483000 audit: BPF prog-id=24 op=LOAD Jan 14 01:10:40.487108 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:10:40.488000 audit: BPF prog-id=25 op=LOAD Jan 14 01:10:40.488000 audit: BPF prog-id=26 op=LOAD Jan 14 01:10:40.489000 audit: BPF prog-id=27 op=LOAD Jan 14 01:10:40.492514 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:10:40.524877 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:10:40.537456 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:10:40.544308 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 14 01:10:40.544634 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 14 01:10:40.555131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:10:40.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:40.566931 (sd-merge)[1273]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Jan 14 01:10:40.587788 (sd-merge)[1273]: Merged extensions into '/usr'. Jan 14 01:10:40.599077 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:10:40.599103 systemd[1]: Reloading... Jan 14 01:10:40.708848 systemd-nsresourced[1278]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:10:40.811887 zram_generator::config[1325]: No configuration found. Jan 14 01:10:40.934715 systemd-oomd[1275]: No swap; memory pressure usage will be degraded Jan 14 01:10:41.072895 systemd-resolved[1276]: Positive Trust Anchors: Jan 14 01:10:41.072910 systemd-resolved[1276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:10:41.072915 systemd-resolved[1276]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:10:41.072955 systemd-resolved[1276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:10:41.098316 systemd-resolved[1276]: Using system hostname 'ci-4578.0.0-p-c80f5dee3b'. Jan 14 01:10:41.314103 systemd[1]: Reloading finished in 714 ms. Jan 14 01:10:41.338821 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:10:41.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.340582 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:10:41.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.342219 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:10:41.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.343690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:10:41.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.345295 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:10:41.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.351857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:10:41.361134 systemd[1]: Starting ensure-sysext.service... Jan 14 01:10:41.366250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 01:10:41.369000 audit: BPF prog-id=28 op=LOAD Jan 14 01:10:41.369000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:10:41.369000 audit: BPF prog-id=29 op=LOAD Jan 14 01:10:41.369000 audit: BPF prog-id=30 op=LOAD Jan 14 01:10:41.369000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:10:41.369000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:10:41.373000 audit: BPF prog-id=31 op=LOAD Jan 14 01:10:41.373000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:10:41.374000 audit: BPF prog-id=32 op=LOAD Jan 14 01:10:41.374000 audit: BPF prog-id=33 op=LOAD Jan 14 01:10:41.374000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:10:41.374000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:10:41.374000 audit: BPF prog-id=34 op=LOAD Jan 14 01:10:41.374000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:10:41.375000 audit: BPF prog-id=35 op=LOAD Jan 14 01:10:41.376000 audit: BPF prog-id=36 op=LOAD Jan 14 01:10:41.376000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:10:41.376000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:10:41.377000 audit: BPF prog-id=37 op=LOAD Jan 14 01:10:41.377000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:10:41.378000 audit: BPF prog-id=38 op=LOAD Jan 14 01:10:41.379000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:10:41.379000 audit: BPF prog-id=39 op=LOAD Jan 14 01:10:41.379000 audit: BPF prog-id=40 op=LOAD Jan 14 01:10:41.379000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:10:41.379000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:10:41.412606 systemd[1]: Reload requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:10:41.412800 systemd[1]: Reloading... Jan 14 01:10:41.438637 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:10:41.438719 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:10:41.439475 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:10:41.447110 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 14 01:10:41.447231 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 14 01:10:41.467081 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:10:41.467101 systemd-tmpfiles[1366]: Skipping /boot Jan 14 01:10:41.502977 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:10:41.502994 systemd-tmpfiles[1366]: Skipping /boot Jan 14 01:10:41.594915 zram_generator::config[1398]: No configuration found. Jan 14 01:10:42.031735 systemd[1]: Reloading finished in 618 ms. Jan 14 01:10:42.045696 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:10:42.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:42.051000 audit: BPF prog-id=41 op=LOAD Jan 14 01:10:42.051000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:10:42.051000 audit: BPF prog-id=42 op=LOAD Jan 14 01:10:42.051000 audit: BPF prog-id=43 op=LOAD Jan 14 01:10:42.051000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:10:42.051000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:10:42.053000 audit: BPF prog-id=44 op=LOAD Jan 14 01:10:42.053000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:10:42.053000 audit: BPF prog-id=45 op=LOAD Jan 14 01:10:42.053000 audit: BPF prog-id=46 op=LOAD Jan 14 01:10:42.053000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:10:42.053000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:10:42.054000 audit: BPF prog-id=47 op=LOAD Jan 14 01:10:42.054000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:10:42.055000 audit: BPF prog-id=48 op=LOAD Jan 14 01:10:42.057000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:10:42.057000 audit: BPF prog-id=49 op=LOAD Jan 14 01:10:42.057000 audit: BPF prog-id=50 op=LOAD Jan 14 01:10:42.057000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:10:42.057000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:10:42.058000 audit: BPF prog-id=51 op=LOAD Jan 14 01:10:42.058000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:10:42.059000 audit: BPF prog-id=52 op=LOAD Jan 14 01:10:42.059000 audit: BPF prog-id=53 op=LOAD Jan 14 01:10:42.059000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:10:42.059000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:10:42.064196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:10:42.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.082044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.084579 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:10:42.089375 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:10:42.090705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:42.096607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:42.102273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:42.107577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:42.109034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:42.109492 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:42.118533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:10:42.119978 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:42.124739 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 14 01:10:42.126000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:10:42.126000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:10:42.128000 audit: BPF prog-id=54 op=LOAD Jan 14 01:10:42.128000 audit: BPF prog-id=55 op=LOAD Jan 14 01:10:42.134479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:10:42.141473 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:10:42.142581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.152175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:42.152679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:42.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.156423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:42.157323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:42.167543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.168814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:42.181417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:42.186264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:42.188253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:42.188679 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:42.188801 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:42.190134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.191694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:42.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:42.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.192715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:42.205088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.205657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:42.208414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:10:42.215117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:42.216539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:42.217022 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:42.217221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:42.217478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.226362 systemd[1]: Finished ensure-sysext.service. Jan 14 01:10:42.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.231000 audit: BPF prog-id=56 op=LOAD Jan 14 01:10:42.234135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 01:10:42.269000 audit[1456]: SYSTEM_BOOT pid=1456 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.284214 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:10:42.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.314074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:42.314551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:42.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:42.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.339304 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:10:42.339665 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:10:42.342318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:42.342555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:42.346376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:10:42.352109 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:42.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.352395 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:42.355358 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:10:42.366949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:10:42.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:42.370087 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:10:42.391000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:10:42.391000 audit[1488]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddbb13b50 a2=420 a3=0 items=0 ppid=1443 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:42.391000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:10:42.393210 augenrules[1488]: No rules Jan 14 01:10:42.393544 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:10:42.398524 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 14 01:10:42.398981 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:10:42.406187 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Jan 14 01:10:42.459355 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:10:42.461243 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:10:42.494602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:10:42.501049 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:10:42.715403 systemd-networkd[1500]: lo: Link UP Jan 14 01:10:42.715417 systemd-networkd[1500]: lo: Gained carrier Jan 14 01:10:42.721175 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:10:42.723038 systemd[1]: Reached target network.target - Network. Jan 14 01:10:42.728781 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:10:42.734242 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:10:42.834639 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:10:42.855117 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:10:42.884670 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jan 14 01:10:42.890183 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 14 01:10:42.891841 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.892187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:42.895892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:42.907824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:42.916289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:42.919087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:42.919299 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:42.919347 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:42.919396 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:10:42.919419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:42.937245 systemd-networkd[1500]: eth1: Configuring with /run/systemd/network/10-b6:b5:5b:63:53:c7.network. 
Jan 14 01:10:42.949950 systemd-networkd[1500]: eth1: Link UP Jan 14 01:10:42.954299 systemd-networkd[1500]: eth1: Gained carrier Jan 14 01:10:42.968011 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 14 01:10:42.976411 kernel: ISO 9660 Extensions: RRIP_1991A Jan 14 01:10:42.985964 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 14 01:10:43.015828 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:10:43.027775 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:43.032163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:43.035548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:43.036825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:43.041148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:10:43.044926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:43.045344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:43.050004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:10:43.095902 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:10:43.106126 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:10:43.109014 systemd-networkd[1500]: eth0: Configuring with /run/systemd/network/10-aa:ea:e9:8b:2d:2b.network. Jan 14 01:10:43.110959 systemd-networkd[1500]: eth0: Link UP Jan 14 01:10:43.111258 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 14 01:10:43.113160 systemd-networkd[1500]: eth0: Gained carrier Jan 14 01:10:43.118762 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 14 01:10:43.193872 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 14 01:10:43.198182 ldconfig[1448]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:10:43.208911 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:10:43.211667 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:10:43.216689 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:10:43.265699 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:10:43.268328 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:10:43.270376 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:10:43.272200 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:10:43.274033 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:10:43.276309 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:10:43.278203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:10:43.279957 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:10:43.282063 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 14 01:10:43.282810 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:10:43.285783 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:10:43.285824 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:10:43.286558 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:10:43.288637 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:10:43.294081 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:10:43.304997 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:10:43.308191 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:10:43.309570 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:10:43.320183 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:10:43.323039 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:10:43.325902 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:10:43.329506 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:10:43.330743 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:10:43.332344 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:10:43.332378 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:10:43.334756 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:10:43.339113 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:10:43.344705 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:10:43.352761 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:10:43.359160 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:10:43.363469 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:10:43.364964 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:10:43.372201 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:10:43.378970 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:10:43.386187 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:10:43.392071 jq[1560]: false Jan 14 01:10:43.393283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:10:43.405154 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:10:43.428251 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:10:43.430959 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:10:43.431798 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:10:43.435200 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 14 01:10:43.442798 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:10:43.453585 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 14 01:10:43.454507 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:10:43.456446 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 01:10:43.457285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:10:43.460600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:10:43.461011 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:10:43.467923 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jan 14 01:10:43.470206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:10:43.489021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:10:43.511874 update_engine[1570]: I20260114 01:10:43.509627 1570 main.cc:92] Flatcar Update Engine starting Jan 14 01:10:43.520697 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 14 01:10:43.520697 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:10:43.520697 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 14 01:10:43.512599 oslogin_cache_refresh[1562]: Failure getting users, quitting Jan 14 01:10:43.512632 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:10:43.512715 oslogin_cache_refresh[1562]: Refreshing group entry cache Jan 14 01:10:43.525645 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 14 01:10:43.525961 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jan 14 01:10:43.527088 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:10:43.526005 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:10:43.539343 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:10:43.539742 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:10:43.548901 extend-filesystems[1561]: Found /dev/vda6 Jan 14 01:10:43.551010 jq[1571]: true Jan 14 01:10:43.583790 extend-filesystems[1561]: Found /dev/vda9 Jan 14 01:10:43.609345 extend-filesystems[1561]: Checking size of /dev/vda9 Jan 14 01:10:43.606177 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:10:43.623671 tar[1575]: linux-amd64/LICENSE Jan 14 01:10:43.623671 tar[1575]: linux-amd64/helm Jan 14 01:10:43.608999 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:10:43.662012 jq[1598]: true Jan 14 01:10:43.704175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 14 01:10:43.713732 extend-filesystems[1561]: Resized partition /dev/vda9 Jan 14 01:10:43.715704 dbus-daemon[1558]: [system] SELinux support is enabled Jan 14 01:10:43.720936 coreos-metadata[1557]: Jan 14 01:10:43.719 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 14 01:10:43.725660 extend-filesystems[1614]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:10:43.727122 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:10:43.739968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:10:43.740030 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:10:43.741178 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:10:43.741393 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 14 01:10:43.741430 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:10:43.746035 coreos-metadata[1557]: Jan 14 01:10:43.744 INFO Fetch successful Jan 14 01:10:43.760343 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Jan 14 01:10:43.775256 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:10:43.777999 update_engine[1570]: I20260114 01:10:43.775343 1570 update_check_scheduler.cc:74] Next update check in 6m24s Jan 14 01:10:43.782532 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:10:43.789185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:43.935766 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:10:43.962284 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:10:43.970258 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:10:43.974773 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:10:43.978337 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:10:43.988142 systemd[1]: Starting sshkeys.service... Jan 14 01:10:44.117324 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 14 01:10:44.154909 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 01:10:44.168396 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 01:10:44.190702 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 01:10:44.190702 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 14 01:10:44.190702 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. 
Jan 14 01:10:44.282085 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 14 01:10:44.282164 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 14 01:10:44.282595 kernel: Console: switching to colour dummy device 80x25 Jan 14 01:10:44.282624 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 14 01:10:44.282659 kernel: [drm] features: -context_init Jan 14 01:10:44.282695 kernel: [drm] number of scanouts: 1 Jan 14 01:10:44.282718 kernel: [drm] number of cap sets: 0 Jan 14 01:10:44.282744 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jan 14 01:10:44.282777 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Jan 14 01:10:44.423644 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 14 01:10:44.423710 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:10:44.423741 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:10:44.423781 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 14 01:10:44.192088 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:10:44.424511 containerd[1591]: time="2026-01-14T01:10:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:10:44.424511 containerd[1591]: time="2026-01-14T01:10:44.387399014Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:10:44.192677 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:10:44.430195 coreos-metadata[1653]: Jan 14 01:10:44.396 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 14 01:10:44.430195 coreos-metadata[1653]: Jan 14 01:10:44.427 INFO Fetch successful Jan 14 01:10:44.262095 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:10:44.427208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:10:44.449107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 01:10:44.462278 containerd[1591]: time="2026-01-14T01:10:44.460958644Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.155µs" Jan 14 01:10:44.462278 containerd[1591]: time="2026-01-14T01:10:44.462072483Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:10:44.462278 containerd[1591]: time="2026-01-14T01:10:44.462159525Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:10:44.462278 containerd[1591]: time="2026-01-14T01:10:44.462178003Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:10:44.462926 unknown[1653]: wrote ssh authorized keys file for user: core Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466164122Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466225682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466339029Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466375091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466730165Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466759157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466780316Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.466794087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.467048262Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.467066913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.467165282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.468607 containerd[1591]: time="2026-01-14T01:10:44.467382715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.469359 containerd[1591]: time="2026-01-14T01:10:44.467416950Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: 
no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:10:44.469359 containerd[1591]: time="2026-01-14T01:10:44.467443117Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:10:44.474443 containerd[1591]: time="2026-01-14T01:10:44.473491994Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:10:44.474443 containerd[1591]: time="2026-01-14T01:10:44.474001426Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:10:44.474443 containerd[1591]: time="2026-01-14T01:10:44.474136617Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:10:44.481231 systemd-logind[1569]: New seat seat0. Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485269612Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485380360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485546706Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485569819Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485740986Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485790557Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:10:44.485686 containerd[1591]: time="2026-01-14T01:10:44.485810875Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:10:44.485348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.485828381Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486713915Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486747459Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486767517Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486811151Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486865536Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.486899532Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487165243Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487207616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487230413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487289330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487311184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487329018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487348317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:10:44.487971 containerd[1591]: time="2026-01-14T01:10:44.487370128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487389283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487420781Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487441369Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487500854Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487568214Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for 
snapshotter \"overlayfs\"" Jan 14 01:10:44.488601 containerd[1591]: time="2026-01-14T01:10:44.487587242Z" level=info msg="Start snapshots syncer" Jan 14 01:10:44.491984 containerd[1591]: time="2026-01-14T01:10:44.491655196Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:10:44.495054 containerd[1591]: time="2026-01-14T01:10:44.493364916Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:10:44.495054 containerd[1591]: time="2026-01-14T01:10:44.493477389Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:10:44.494144 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:10:44.495443 containerd[1591]: time="2026-01-14T01:10:44.493566024Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:10:44.495443 containerd[1591]: time="2026-01-14T01:10:44.493790903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:10:44.494175 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:10:44.497518 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.493827716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.498782949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.498912641Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.498952572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.498974316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.498998146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499018279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499036409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499086974Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499111311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499124368Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499139049Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499150537Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:10:44.500993 containerd[1591]: time="2026-01-14T01:10:44.499164500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499179879Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499198863Z" level=info msg="runtime interface created" Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499206422Z" level=info msg="created NRI interface" Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499218461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499240122Z" level=info msg="Connect containerd service" Jan 14 01:10:44.501693 containerd[1591]: time="2026-01-14T01:10:44.499268214Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:10:44.507318 containerd[1591]: 
time="2026-01-14T01:10:44.503283118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:10:44.509958 systemd[1]: Started sshd@0-143.198.154.109:22-4.153.228.146:50476.service - OpenSSH per-connection server daemon (4.153.228.146:50476). Jan 14 01:10:44.512336 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:10:44.539701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:44.541442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:44.541624 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:44.554056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:44.611879 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:10:44.616023 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:10:44.646594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:10:44.653099 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:10:44.670674 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 01:10:44.686960 systemd[1]: Finished sshkeys.service. Jan 14 01:10:44.769575 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:10:44.805959 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:10:44.817065 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:10:44.817746 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:10:44.828226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:44.829972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:44.847861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:44.868045 systemd-networkd[1500]: eth1: Gained IPv6LL Jan 14 01:10:44.870056 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Jan 14 01:10:44.881556 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:10:44.887796 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:10:44.893281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:10:44.901789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:10:44.968462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:45.064088 containerd[1591]: time="2026-01-14T01:10:45.058343111Z" level=info msg="Start subscribing containerd event" Jan 14 01:10:45.064791 systemd-networkd[1500]: eth0: Gained IPv6LL Jan 14 01:10:45.066037 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. 
Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.066259828Z" level=info msg="Start recovering state" Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067141048Z" level=info msg="Start event monitor" Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067172036Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067195103Z" level=info msg="Start streaming server" Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067209683Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067221563Z" level=info msg="runtime interface starting up..." Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067238352Z" level=info msg="starting plugins..." Jan 14 01:10:45.067917 containerd[1591]: time="2026-01-14T01:10:45.067261061Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:10:45.076921 containerd[1591]: time="2026-01-14T01:10:45.075552778Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:10:45.076921 containerd[1591]: time="2026-01-14T01:10:45.075682036Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:10:45.076921 containerd[1591]: time="2026-01-14T01:10:45.075788574Z" level=info msg="containerd successfully booted in 0.707799s" Jan 14 01:10:45.076009 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:10:45.099787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:10:45.110896 sshd[1675]: Accepted publickey for core from 4.153.228.146 port 50476 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:45.121329 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:45.160158 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:10:45.165494 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:10:45.200239 systemd-logind[1569]: New session 1 of user core. Jan 14 01:10:45.218934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:10:45.230495 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:10:45.272305 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:45.281083 systemd-logind[1569]: New session 2 of user core. Jan 14 01:10:45.608417 systemd[1727]: Queued start job for default target default.target. Jan 14 01:10:45.617790 systemd[1727]: Created slice app.slice - User Application Slice. Jan 14 01:10:45.618047 systemd[1727]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:10:45.618916 systemd[1727]: Reached target paths.target - Paths. Jan 14 01:10:45.619014 systemd[1727]: Reached target timers.target - Timers. Jan 14 01:10:45.622244 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:10:45.624259 systemd[1727]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:10:45.657862 tar[1575]: linux-amd64/README.md Jan 14 01:10:45.687096 systemd[1727]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:10:45.691456 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Jan 14 01:10:45.691684 systemd[1727]: Reached target sockets.target - Sockets. Jan 14 01:10:45.691766 systemd[1727]: Reached target basic.target - Basic System. Jan 14 01:10:45.691832 systemd[1727]: Reached target default.target - Main User Target. Jan 14 01:10:45.692483 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:10:45.692728 systemd[1727]: Startup finished in 393ms. Jan 14 01:10:45.696662 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:10:45.707335 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:10:45.918338 systemd[1]: Started sshd@1-143.198.154.109:22-4.153.228.146:50492.service - OpenSSH per-connection server daemon (4.153.228.146:50492). Jan 14 01:10:46.308130 sshd[1744]: Accepted publickey for core from 4.153.228.146 port 50492 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:46.312045 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:46.326935 systemd-logind[1569]: New session 3 of user core. Jan 14 01:10:46.332276 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:10:46.515889 sshd[1748]: Connection closed by 4.153.228.146 port 50492 Jan 14 01:10:46.516728 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:46.527823 systemd[1]: sshd@1-143.198.154.109:22-4.153.228.146:50492.service: Deactivated successfully. Jan 14 01:10:46.532006 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:10:46.537055 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:10:46.540017 systemd-logind[1569]: Removed session 3. Jan 14 01:10:46.591983 systemd[1]: Started sshd@2-143.198.154.109:22-4.153.228.146:50498.service - OpenSSH per-connection server daemon (4.153.228.146:50498). Jan 14 01:10:46.669168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:46.674346 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:10:46.677022 systemd[1]: Startup finished in 3.443s (kernel) + 6.685s (initrd) + 8.324s (userspace) = 18.453s. Jan 14 01:10:46.686476 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:46.981174 sshd[1754]: Accepted publickey for core from 4.153.228.146 port 50498 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:46.982813 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:46.996975 systemd-logind[1569]: New session 4 of user core. Jan 14 01:10:47.007299 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:10:47.175489 sshd[1768]: Connection closed by 4.153.228.146 port 50498 Jan 14 01:10:47.176162 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:47.191087 systemd[1]: sshd@2-143.198.154.109:22-4.153.228.146:50498.service: Deactivated successfully. Jan 14 01:10:47.195492 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:10:47.198404 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 14 01:10:47.201615 systemd-logind[1569]: Removed session 4. 
Jan 14 01:10:47.500640 kubelet[1762]: E0114 01:10:47.500536 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:47.504456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:47.504681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:10:47.505258 systemd[1]: kubelet.service: Consumed 1.436s CPU time, 258.5M memory peak. Jan 14 01:10:57.255095 systemd[1]: Started sshd@3-143.198.154.109:22-4.153.228.146:45412.service - OpenSSH per-connection server daemon (4.153.228.146:45412). Jan 14 01:10:57.559344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:10:57.562050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:10:57.644896 sshd[1781]: Accepted publickey for core from 4.153.228.146 port 45412 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:57.648987 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:57.658433 systemd-logind[1569]: New session 5 of user core. Jan 14 01:10:57.665204 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:10:57.809750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:57.823809 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:57.849855 sshd[1788]: Connection closed by 4.153.228.146 port 45412 Jan 14 01:10:57.850381 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:57.861368 systemd[1]: sshd@3-143.198.154.109:22-4.153.228.146:45412.service: Deactivated successfully. Jan 14 01:10:57.865770 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:10:57.868043 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:10:57.871268 systemd-logind[1569]: Removed session 5. Jan 14 01:10:57.890566 kubelet[1795]: E0114 01:10:57.890501 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:57.895688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:57.896517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:10:57.897513 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.4M memory peak. Jan 14 01:10:57.929724 systemd[1]: Started sshd@4-143.198.154.109:22-4.153.228.146:45426.service - OpenSSH per-connection server daemon (4.153.228.146:45426). Jan 14 01:10:58.332461 sshd[1807]: Accepted publickey for core from 4.153.228.146 port 45426 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:58.333638 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:58.340571 systemd-logind[1569]: New session 6 of user core. Jan 14 01:10:58.351364 systemd[1]: Started session-6.scope - Session 6 of User core. 
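
The kubelet failures above are expected before kubeadm has run on this node: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and until then kubelet.service just restarts and exits with this error. Only to illustrate the file being looked for (not the provisioning path this image actually uses), a sketch that writes a minimal KubeletConfiguration; the field values are assumptions:

import pathlib

minimal = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # assumed; pairs with SystemdCgroup=true in the containerd CRI config above
"""
pathlib.Path("/var/lib/kubelet/config.yaml").write_text(minimal)
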
Jan 14 01:10:58.528329 sshd[1811]: Connection closed by 4.153.228.146 port 45426 Jan 14 01:10:58.528206 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:58.534668 systemd[1]: sshd@4-143.198.154.109:22-4.153.228.146:45426.service: Deactivated successfully. Jan 14 01:10:58.537871 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:10:58.540613 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:10:58.542764 systemd-logind[1569]: Removed session 6. Jan 14 01:10:58.621088 systemd[1]: Started sshd@5-143.198.154.109:22-4.153.228.146:45440.service - OpenSSH per-connection server daemon (4.153.228.146:45440). Jan 14 01:10:59.033103 sshd[1817]: Accepted publickey for core from 4.153.228.146 port 45440 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:59.035249 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:59.044531 systemd-logind[1569]: New session 7 of user core. Jan 14 01:10:59.052258 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:10:59.248392 sshd[1821]: Connection closed by 4.153.228.146 port 45440 Jan 14 01:10:59.249145 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:59.256498 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:10:59.257609 systemd[1]: sshd@5-143.198.154.109:22-4.153.228.146:45440.service: Deactivated successfully. Jan 14 01:10:59.260707 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:10:59.263447 systemd-logind[1569]: Removed session 7. Jan 14 01:10:59.327571 systemd[1]: Started sshd@6-143.198.154.109:22-4.153.228.146:45448.service - OpenSSH per-connection server daemon (4.153.228.146:45448). Jan 14 01:10:59.694482 sshd[1827]: Accepted publickey for core from 4.153.228.146 port 45448 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:10:59.696775 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:59.706679 systemd-logind[1569]: New session 8 of user core. Jan 14 01:10:59.728272 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:10:59.841498 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:10:59.842010 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:10:59.856116 sudo[1832]: pam_unix(sudo:session): session closed for user root Jan 14 01:10:59.917147 sshd[1831]: Connection closed by 4.153.228.146 port 45448 Jan 14 01:10:59.918335 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:59.928613 systemd[1]: sshd@6-143.198.154.109:22-4.153.228.146:45448.service: Deactivated successfully. Jan 14 01:10:59.932686 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:10:59.936068 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:10:59.937352 systemd-logind[1569]: Removed session 8. Jan 14 01:11:00.009267 systemd[1]: Started sshd@7-143.198.154.109:22-4.153.228.146:45458.service - OpenSSH per-connection server daemon (4.153.228.146:45458). 
Jan 14 01:11:00.420886 sshd[1839]: Accepted publickey for core from 4.153.228.146 port 45458 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:11:00.423782 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:00.431364 systemd-logind[1569]: New session 9 of user core. Jan 14 01:11:00.440630 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:11:00.570195 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:11:00.570684 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:11:00.575690 sudo[1845]: pam_unix(sudo:session): session closed for user root Jan 14 01:11:00.588075 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:11:00.588596 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:11:00.602627 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:11:00.666000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:11:00.668340 kernel: kauditd_printk_skb: 137 callbacks suppressed Jan 14 01:11:00.668500 kernel: audit: type=1305 audit(1768353060.666:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:11:00.670868 augenrules[1869]: No rules Jan 14 01:11:00.666000 audit[1869]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21743b20 a2=420 a3=0 items=0 ppid=1850 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.675541 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:11:00.676575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:11:00.678401 kernel: audit: type=1300 audit(1768353060.666:232): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21743b20 a2=420 a3=0 items=0 ppid=1850 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.683516 kernel: audit: type=1327 audit(1768353060.666:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:11:00.666000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:11:00.682221 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 14 01:11:00.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.692355 kernel: audit: type=1130 audit(1768353060.677:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:00.692489 kernel: audit: type=1131 audit(1768353060.678:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.692568 kernel: audit: type=1106 audit(1768353060.681:235): pid=1844 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.681000 audit[1844]: USER_END pid=1844 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.701982 kernel: audit: type=1104 audit(1768353060.681:236): pid=1844 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.681000 audit[1844]: CRED_DISP pid=1844 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.751637 sshd[1843]: Connection closed by 4.153.228.146 port 45458 Jan 14 01:11:00.752200 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:00.754000 audit[1839]: USER_END pid=1839 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:00.761880 kernel: audit: type=1106 audit(1768353060.754:237): pid=1839 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:00.762188 systemd[1]: sshd@7-143.198.154.109:22-4.153.228.146:45458.service: Deactivated successfully. Jan 14 01:11:00.754000 audit[1839]: CRED_DISP pid=1839 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:00.767894 kernel: audit: type=1104 audit(1768353060.754:238): pid=1839 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:00.765999 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:11:00.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-143.198.154.109:22-4.153.228.146:45458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.773669 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. 
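
The audit records above carry the executed command line as a hex-encoded, NUL-separated PROCTITLE field. A small sketch to decode one, using the auditctl record logged at 01:11:00.666:

def decode_proctitle(hex_argv: str) -> str:
    """Audit PROCTITLE values are the process argv, NUL-separated and hex-encoded."""
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# -> /sbin/auditctl -R /etc/audit/audit.rules
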
Jan 14 01:11:00.773883 kernel: audit: type=1131 audit(1768353060.761:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-143.198.154.109:22-4.153.228.146:45458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.775797 systemd-logind[1569]: Removed session 9. Jan 14 01:11:00.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.154.109:22-4.153.228.146:45472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.829451 systemd[1]: Started sshd@8-143.198.154.109:22-4.153.228.146:45472.service - OpenSSH per-connection server daemon (4.153.228.146:45472). Jan 14 01:11:01.204000 audit[1878]: USER_ACCT pid=1878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:01.206101 sshd[1878]: Accepted publickey for core from 4.153.228.146 port 45472 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:11:01.207000 audit[1878]: CRED_ACQ pid=1878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:01.207000 audit[1878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce0506860 a2=3 a3=0 items=0 ppid=1 pid=1878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:01.207000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:01.209164 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:01.218762 systemd-logind[1569]: New session 10 of user core. Jan 14 01:11:01.229158 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 01:11:01.232000 audit[1878]: USER_START pid=1878 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:01.235000 audit[1882]: CRED_ACQ pid=1882 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:01.339036 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:11:01.339679 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:11:01.337000 audit[1883]: USER_ACCT pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:01.338000 audit[1883]: CRED_REFR pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:01.338000 audit[1883]: USER_START pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:01.958015 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:11:01.986433 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:11:02.470868 dockerd[1902]: time="2026-01-14T01:11:02.470700716Z" level=info msg="Starting up" Jan 14 01:11:02.473002 dockerd[1902]: time="2026-01-14T01:11:02.472861497Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:11:02.494981 dockerd[1902]: time="2026-01-14T01:11:02.494828610Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:11:02.534662 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4058843265-merged.mount: Deactivated successfully. Jan 14 01:11:02.578867 dockerd[1902]: time="2026-01-14T01:11:02.578743989Z" level=info msg="Loading containers: start." Jan 14 01:11:02.595008 kernel: Initializing XFRM netlink socket Jan 14 01:11:02.684000 audit[1951]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.684000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc73132fc0 a2=0 a3=0 items=0 ppid=1902 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:11:02.687000 audit[1953]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.687000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd781674d0 a2=0 a3=0 items=0 ppid=1902 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.687000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:11:02.691000 audit[1955]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.691000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd05fe070 a2=0 a3=0 items=0 ppid=1902 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:11:02.695000 
audit[1957]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.695000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe26c493b0 a2=0 a3=0 items=0 ppid=1902 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:11:02.699000 audit[1959]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.699000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfdedfaf0 a2=0 a3=0 items=0 ppid=1902 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:11:02.703000 audit[1961]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.703000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea5a204b0 a2=0 a3=0 items=0 ppid=1902 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:02.710000 audit[1963]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.710000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff45cb3060 a2=0 a3=0 items=0 ppid=1902 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:02.714000 audit[1965]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.714000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffddda8c310 a2=0 a3=0 items=0 ppid=1902 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:11:02.760000 audit[1968]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:11:02.760000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffccefbba80 a2=0 a3=0 items=0 ppid=1902 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:11:02.763000 audit[1970]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.763000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe76102530 a2=0 a3=0 items=0 ppid=1902 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:11:02.767000 audit[1972]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.767000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe8bd97820 a2=0 a3=0 items=0 ppid=1902 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.767000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:11:02.770000 audit[1974]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.770000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd061b1a60 a2=0 a3=0 items=0 ppid=1902 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:02.774000 audit[1976]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.774000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc046cc830 a2=0 a3=0 items=0 ppid=1902 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.774000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:11:02.835000 audit[2006]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.835000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 
a1=7ffe0e858a20 a2=0 a3=0 items=0 ppid=1902 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:11:02.839000 audit[2008]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.839000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdc0998080 a2=0 a3=0 items=0 ppid=1902 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:11:02.842000 audit[2010]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.842000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc931bc720 a2=0 a3=0 items=0 ppid=1902 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:11:02.845000 audit[2012]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.845000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccb4b2870 a2=0 a3=0 items=0 ppid=1902 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:11:02.848000 audit[2014]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.848000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffccbc661e0 a2=0 a3=0 items=0 ppid=1902 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.848000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:11:02.851000 audit[2016]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.851000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcd8f29400 a2=0 a3=0 items=0 ppid=1902 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:11:02.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:02.854000 audit[2018]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.854000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe8562ec20 a2=0 a3=0 items=0 ppid=1902 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:02.857000 audit[2020]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.857000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd93f83d0 a2=0 a3=0 items=0 ppid=1902 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:11:02.861000 audit[2022]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.861000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc1f224cd0 a2=0 a3=0 items=0 ppid=1902 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.861000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:11:02.865000 audit[2024]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.865000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd450eba10 a2=0 a3=0 items=0 ppid=1902 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:11:02.868000 audit[2026]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.868000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd60660b80 a2=0 a3=0 items=0 ppid=1902 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.868000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:11:02.871000 audit[2028]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.871000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff732fd010 a2=0 a3=0 items=0 ppid=1902 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.871000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:02.875000 audit[2030]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.875000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde3f1d940 a2=0 a3=0 items=0 ppid=1902 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:11:02.883000 audit[2035]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.883000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc81ea0a0 a2=0 a3=0 items=0 ppid=1902 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:11:02.886000 audit[2037]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.886000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc85732290 a2=0 a3=0 items=0 ppid=1902 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:11:02.890000 audit[2039]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.890000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffed9ea3a0 a2=0 a3=0 items=0 ppid=1902 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:11:02.893000 audit[2041]: NETFILTER_CFG table=filter:31 
family=10 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.893000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff271faca0 a2=0 a3=0 items=0 ppid=1902 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:11:02.897000 audit[2043]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.897000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff7e449800 a2=0 a3=0 items=0 ppid=1902 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:11:02.900000 audit[2045]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:02.900000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc5522a610 a2=0 a3=0 items=0 ppid=1902 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.900000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:11:02.917983 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. 
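
The audit PROCTITLE records above carry the iptables/ip6tables invocations (dockerd creating its DOCKER-USER, DOCKER-FORWARD and isolation chains) as hex-encoded, NUL-separated argv strings. A minimal stdlib-only sketch to turn such a value back into a readable command; the sample string is copied verbatim from one of the records above:

```python
# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) into a command line.
# Stdlib only; the sample below is taken from the "-N DOCKER-USER" record above.
def decode_proctitle(hex_argv: str) -> str:
    args = bytes.fromhex(hex_argv).decode("utf-8", "replace").split("\x00")
    return " ".join(arg for arg in args if arg)

sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
          "002D740066696C746572002D4E00444F434B45522D55534552")
print(decode_proctitle(sample))
# -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
```
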
Jan 14 01:11:02.945000 audit[2051]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.945000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe863d8010 a2=0 a3=0 items=0 ppid=1902 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:11:02.949000 audit[2053]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.949000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5ad2b0a0 a2=0 a3=0 items=0 ppid=1902 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.949000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:11:02.963000 audit[2061]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.963000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe2cffa7b0 a2=0 a3=0 items=0 ppid=1902 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:11:02.977000 audit[2067]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.977000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdcd1d3870 a2=0 a3=0 items=0 ppid=1902 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:11:02.981000 audit[2069]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.981000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc6709c650 a2=0 a3=0 items=0 ppid=1902 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.981000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:11:02.985000 audit[2071]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.985000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2b44b990 a2=0 a3=0 items=0 ppid=1902 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:11:02.988000 audit[2073]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:02.988000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffed179fc50 a2=0 a3=0 items=0 ppid=1902 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:02.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:03.545347 systemd-resolved[1276]: Clock change detected. Flushing caches. Jan 14 01:11:03.546765 systemd-timesyncd[1469]: Contacted time server 144.202.0.197:123 (2.flatcar.pool.ntp.org). Jan 14 01:11:03.546850 systemd-timesyncd[1469]: Initial clock synchronization to Wed 2026-01-14 01:11:03.545268 UTC. Jan 14 01:11:03.548000 audit[2075]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:03.548000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeb0f97890 a2=0 a3=0 items=0 ppid=1902 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:03.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:11:03.550167 systemd-networkd[1500]: docker0: Link UP Jan 14 01:11:03.559244 dockerd[1902]: time="2026-01-14T01:11:03.559157280Z" level=info msg="Loading containers: done." 
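
The dockerd entries here finish loading containers and, just below, report the daemon version (28.0.4) and its API socket at /run/docker.sock. As an illustration only (not part of the logged boot), the engine's /version endpoint can be queried over that unix socket with nothing but the standard library, assuming the default socket path from the log:

```python
# Query the Docker Engine API over the unix socket announced in the log above
# ("API listen on /run/docker.sock"); stdlib-only sketch, no docker SDK assumed.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to an AF_UNIX socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")  # host is only used for the Host: header
        self._path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info.get("Version"), info.get("ApiVersion"))  # e.g. 28.0.4, matching the daemon line above
```
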
Jan 14 01:11:03.591611 dockerd[1902]: time="2026-01-14T01:11:03.591446974Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:11:03.591611 dockerd[1902]: time="2026-01-14T01:11:03.591571304Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:11:03.593964 dockerd[1902]: time="2026-01-14T01:11:03.593844569Z" level=info msg="Initializing buildkit" Jan 14 01:11:03.632285 dockerd[1902]: time="2026-01-14T01:11:03.632211545Z" level=info msg="Completed buildkit initialization" Jan 14 01:11:03.642696 dockerd[1902]: time="2026-01-14T01:11:03.642101496Z" level=info msg="Daemon has completed initialization" Jan 14 01:11:03.642696 dockerd[1902]: time="2026-01-14T01:11:03.642211692Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:11:03.643122 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:11:03.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:04.082385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2375218953-merged.mount: Deactivated successfully. Jan 14 01:11:04.666432 containerd[1591]: time="2026-01-14T01:11:04.666374787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 14 01:11:05.873921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544225173.mount: Deactivated successfully. Jan 14 01:11:07.055418 containerd[1591]: time="2026-01-14T01:11:07.055327640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.059721 containerd[1591]: time="2026-01-14T01:11:07.059362800Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=31240" Jan 14 01:11:07.062453 containerd[1591]: time="2026-01-14T01:11:07.062372251Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.069359 containerd[1591]: time="2026-01-14T01:11:07.069251049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.077747 containerd[1591]: time="2026-01-14T01:11:07.077164397Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.409838366s" Jan 14 01:11:07.077747 containerd[1591]: time="2026-01-14T01:11:07.077239090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 14 01:11:07.079130 containerd[1591]: time="2026-01-14T01:11:07.079074236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 14 01:11:08.614127 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:11:08.619730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:08.867742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:08.872871 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 01:11:08.872993 kernel: audit: type=1130 audit(1768353068.868:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:08.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:08.893415 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:08.962069 containerd[1591]: time="2026-01-14T01:11:08.961980325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:08.966596 containerd[1591]: time="2026-01-14T01:11:08.966536020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 14 01:11:08.985114 kubelet[2192]: E0114 01:11:08.985056 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:08.989493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:08.990267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:11:08.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:08.991910 systemd[1]: kubelet.service: Consumed 244ms CPU time, 108.4M memory peak. Jan 14 01:11:08.996728 kernel: audit: type=1131 audit(1768353068.990:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:08.999597 containerd[1591]: time="2026-01-14T01:11:08.999520974Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:09.080352 containerd[1591]: time="2026-01-14T01:11:09.080244625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:09.083786 containerd[1591]: time="2026-01-14T01:11:09.083510432Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.004389367s" Jan 14 01:11:09.083786 containerd[1591]: time="2026-01-14T01:11:09.083594453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 14 01:11:09.084559 containerd[1591]: time="2026-01-14T01:11:09.084495871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 14 01:11:09.694304 systemd-resolved[1276]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 14 01:11:10.750960 containerd[1591]: time="2026-01-14T01:11:10.750734035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.754723 containerd[1591]: time="2026-01-14T01:11:10.754633049Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 14 01:11:10.755977 containerd[1591]: time="2026-01-14T01:11:10.755913184Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.762104 containerd[1591]: time="2026-01-14T01:11:10.762036405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.765508 containerd[1591]: time="2026-01-14T01:11:10.765232748Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.680684058s" Jan 14 01:11:10.765508 containerd[1591]: time="2026-01-14T01:11:10.765291684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 14 01:11:10.766410 containerd[1591]: time="2026-01-14T01:11:10.766362737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 14 01:11:12.307680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206495466.mount: Deactivated successfully. 
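
The kubelet start attempts above exit with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is only written once kubeadm init/join runs, so systemd keeps scheduling restarts in the meantime. A tiny sketch of the same pre-condition, using the path quoted in the error message (purely illustrative):

```python
# Mirror the kubelet pre-flight condition seen in the failures above: the unit
# keeps restarting until /var/lib/kubelet/config.yaml exists (kubeadm creates it).
from pathlib import Path

config = Path("/var/lib/kubelet/config.yaml")
if config.is_file():
    print(f"{config} present ({config.stat().st_size} bytes); kubelet can load it")
else:
    print(f"{config} missing; kubelet will keep exiting with status 1 until kubeadm writes it")
```
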
Jan 14 01:11:12.750999 systemd-resolved[1276]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 14 01:11:12.792698 containerd[1591]: time="2026-01-14T01:11:12.792144206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:12.795787 containerd[1591]: time="2026-01-14T01:11:12.795727878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571" Jan 14 01:11:12.797888 containerd[1591]: time="2026-01-14T01:11:12.797813802Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:12.802734 containerd[1591]: time="2026-01-14T01:11:12.802677447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:12.803427 containerd[1591]: time="2026-01-14T01:11:12.803391864Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.036992928s" Jan 14 01:11:12.803556 containerd[1591]: time="2026-01-14T01:11:12.803540395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 14 01:11:12.804383 containerd[1591]: time="2026-01-14T01:11:12.804333998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 01:11:13.842750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174804811.mount: Deactivated successfully. 
Jan 14 01:11:14.941776 containerd[1591]: time="2026-01-14T01:11:14.941596082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:14.944517 containerd[1591]: time="2026-01-14T01:11:14.944462323Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21569464" Jan 14 01:11:14.950016 containerd[1591]: time="2026-01-14T01:11:14.949950661Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:14.956811 containerd[1591]: time="2026-01-14T01:11:14.956722434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:14.959361 containerd[1591]: time="2026-01-14T01:11:14.958946380Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.154567633s" Jan 14 01:11:14.959361 containerd[1591]: time="2026-01-14T01:11:14.959001484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 14 01:11:14.959932 containerd[1591]: time="2026-01-14T01:11:14.959900585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 01:11:15.608282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397928657.mount: Deactivated successfully. 
Jan 14 01:11:15.620189 containerd[1591]: time="2026-01-14T01:11:15.620113827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:15.622069 containerd[1591]: time="2026-01-14T01:11:15.622018859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 01:11:15.623463 containerd[1591]: time="2026-01-14T01:11:15.623378102Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:15.628388 containerd[1591]: time="2026-01-14T01:11:15.628265722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:15.629413 containerd[1591]: time="2026-01-14T01:11:15.628969984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 668.895058ms" Jan 14 01:11:15.629413 containerd[1591]: time="2026-01-14T01:11:15.629018537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 14 01:11:15.630147 containerd[1591]: time="2026-01-14T01:11:15.629851223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 01:11:15.855935 systemd-resolved[1276]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 14 01:11:16.387505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866759713.mount: Deactivated successfully. 
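
The containerd records above emit one "Pulled image … size … in <duration>" line per image (kube-apiserver through pause so far). A small helper to extract those figures from journal text on stdin, e.g. piped from journalctl; the regex matches the escaped quoting used in these log lines and is only a log-analysis sketch:

```python
# Summarize containerd "Pulled image ... size ... in <duration>" records like the
# ones above. Reads journal text from stdin (e.g. piped from journalctl -b).
import re
import sys

PULL_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*?size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+m?s)'
)

for line in sys.stdin:
    m = PULL_RE.search(line)
    if m:
        mib = int(m.group("size")) / (1024 * 1024)
        print(f'{m.group("image"):55} {mib:8.1f} MiB  {m.group("dur")}')
```

Against the entries above this would report roughly 25.8 MiB in ~2.4 s for kube-apiserver down to about 0.3 MiB in ~0.67 s for pause:3.10.1.
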
Jan 14 01:11:18.960613 containerd[1591]: time="2026-01-14T01:11:18.960542753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:18.963410 containerd[1591]: time="2026-01-14T01:11:18.963353044Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 14 01:11:18.965292 containerd[1591]: time="2026-01-14T01:11:18.965238369Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:18.971988 containerd[1591]: time="2026-01-14T01:11:18.971928272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:18.974915 containerd[1591]: time="2026-01-14T01:11:18.974799745Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.344915151s" Jan 14 01:11:18.974915 containerd[1591]: time="2026-01-14T01:11:18.974867593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 14 01:11:19.114321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 01:11:19.118551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:19.417873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:19.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:19.423728 kernel: audit: type=1130 audit(1768353079.417:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:19.441268 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:19.543278 kubelet[2343]: E0114 01:11:19.543207 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:19.548466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:19.548628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:11:19.549293 systemd[1]: kubelet.service: Consumed 251ms CPU time, 109.4M memory peak. Jan 14 01:11:19.555066 kernel: audit: type=1131 audit(1768353079.548:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:19.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:21.855388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:21.856124 systemd[1]: kubelet.service: Consumed 251ms CPU time, 109.4M memory peak. Jan 14 01:11:21.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:21.861019 kernel: audit: type=1130 audit(1768353081.855:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:21.859985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:21.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:21.867861 kernel: audit: type=1131 audit(1768353081.855:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:21.915565 systemd[1]: Reload requested from client PID 2367 ('systemctl') (unit session-10.scope)... Jan 14 01:11:21.915591 systemd[1]: Reloading... Jan 14 01:11:22.113696 zram_generator::config[2413]: No configuration found. Jan 14 01:11:22.505387 systemd[1]: Reloading finished in 589 ms. 
Jan 14 01:11:22.544000 audit: BPF prog-id=61 op=LOAD Jan 14 01:11:22.549779 kernel: audit: type=1334 audit(1768353082.544:296): prog-id=61 op=LOAD Jan 14 01:11:22.549883 kernel: audit: type=1334 audit(1768353082.544:297): prog-id=62 op=LOAD Jan 14 01:11:22.544000 audit: BPF prog-id=62 op=LOAD Jan 14 01:11:22.552716 kernel: audit: type=1334 audit(1768353082.544:298): prog-id=54 op=UNLOAD Jan 14 01:11:22.544000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:11:22.544000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:11:22.557721 kernel: audit: type=1334 audit(1768353082.544:299): prog-id=55 op=UNLOAD Jan 14 01:11:22.547000 audit: BPF prog-id=63 op=LOAD Jan 14 01:11:22.560760 kernel: audit: type=1334 audit(1768353082.547:300): prog-id=63 op=LOAD Jan 14 01:11:22.547000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:11:22.547000 audit: BPF prog-id=64 op=LOAD Jan 14 01:11:22.547000 audit: BPF prog-id=65 op=LOAD Jan 14 01:11:22.547000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:11:22.547000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:11:22.548000 audit: BPF prog-id=66 op=LOAD Jan 14 01:11:22.548000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:11:22.548000 audit: BPF prog-id=67 op=LOAD Jan 14 01:11:22.548000 audit: BPF prog-id=68 op=LOAD Jan 14 01:11:22.548000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:11:22.548000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:11:22.550000 audit: BPF prog-id=69 op=LOAD Jan 14 01:11:22.550000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:11:22.552000 audit: BPF prog-id=70 op=LOAD Jan 14 01:11:22.552000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:11:22.553000 audit: BPF prog-id=71 op=LOAD Jan 14 01:11:22.553000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:11:22.553000 audit: BPF prog-id=72 op=LOAD Jan 14 01:11:22.555000 audit: BPF prog-id=73 op=LOAD Jan 14 01:11:22.564806 kernel: audit: type=1334 audit(1768353082.547:301): prog-id=41 op=UNLOAD Jan 14 01:11:22.555000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:11:22.555000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:11:22.556000 audit: BPF prog-id=74 op=LOAD Jan 14 01:11:22.556000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:11:22.562000 audit: BPF prog-id=75 op=LOAD Jan 14 01:11:22.562000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:11:22.562000 audit: BPF prog-id=76 op=LOAD Jan 14 01:11:22.562000 audit: BPF prog-id=77 op=LOAD Jan 14 01:11:22.562000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:11:22.562000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:11:22.566000 audit: BPF prog-id=78 op=LOAD Jan 14 01:11:22.566000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:11:22.566000 audit: BPF prog-id=79 op=LOAD Jan 14 01:11:22.566000 audit: BPF prog-id=80 op=LOAD Jan 14 01:11:22.566000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:11:22.566000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:11:22.612703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:11:22.612833 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:11:22.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:22.613284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:22.613361 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.6M memory peak. Jan 14 01:11:22.615610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:22.824511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:11:22.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:22.843469 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:11:22.920592 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:11:22.921079 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:22.923712 kubelet[2467]: I0114 01:11:22.923587 2467 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:11:23.169083 kubelet[2467]: I0114 01:11:23.169010 2467 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 01:11:23.169083 kubelet[2467]: I0114 01:11:23.169061 2467 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:11:23.171596 kubelet[2467]: I0114 01:11:23.171483 2467 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 01:11:23.171596 kubelet[2467]: I0114 01:11:23.171550 2467 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:11:23.172095 kubelet[2467]: I0114 01:11:23.172041 2467 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:11:23.196895 kubelet[2467]: I0114 01:11:23.196089 2467 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:11:23.224568 kubelet[2467]: E0114 01:11:23.224483 2467 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.154.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:11:23.246566 kubelet[2467]: I0114 01:11:23.246486 2467 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:11:23.260174 kubelet[2467]: I0114 01:11:23.260058 2467 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 01:11:23.260556 kubelet[2467]: I0114 01:11:23.260473 2467 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:11:23.262688 kubelet[2467]: I0114 01:11:23.260521 2467 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-c80f5dee3b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:11:23.262688 kubelet[2467]: I0114 01:11:23.262654 2467 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:11:23.262688 kubelet[2467]: I0114 01:11:23.262712 2467 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 01:11:23.263266 kubelet[2467]: I0114 01:11:23.262904 2467 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 01:11:23.268141 kubelet[2467]: I0114 01:11:23.268060 2467 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:23.268463 kubelet[2467]: I0114 01:11:23.268431 2467 kubelet.go:475] "Attempting to sync node with API server" Jan 14 01:11:23.269221 kubelet[2467]: I0114 01:11:23.269076 2467 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:11:23.269340 kubelet[2467]: E0114 01:11:23.269258 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.154.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-c80f5dee3b&limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:11:23.270167 kubelet[2467]: I0114 01:11:23.270136 2467 kubelet.go:387] "Adding apiserver pod source" Jan 14 01:11:23.270818 kubelet[2467]: I0114 01:11:23.270782 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:11:23.281727 kubelet[2467]: E0114 01:11:23.281419 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://143.198.154.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:11:23.282525 kubelet[2467]: I0114 01:11:23.282477 2467 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:11:23.287168 kubelet[2467]: I0114 01:11:23.287113 2467 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:11:23.287441 kubelet[2467]: I0114 01:11:23.287419 2467 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 01:11:23.287678 kubelet[2467]: W0114 01:11:23.287639 2467 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:11:23.302079 kubelet[2467]: I0114 01:11:23.302042 2467 server.go:1262] "Started kubelet" Jan 14 01:11:23.305125 kubelet[2467]: I0114 01:11:23.305087 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:11:23.313805 kubelet[2467]: E0114 01:11:23.311343 2467 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.154.109:6443/api/v1/namespaces/default/events\": dial tcp 143.198.154.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578.0.0-p-c80f5dee3b.188a73c297ae306c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-c80f5dee3b,UID:ci-4578.0.0-p-c80f5dee3b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-c80f5dee3b,},FirstTimestamp:2026-01-14 01:11:23.301965932 +0000 UTC m=+0.449929565,LastTimestamp:2026-01-14 01:11:23.301965932 +0000 UTC m=+0.449929565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-c80f5dee3b,}" Jan 14 01:11:23.315811 kubelet[2467]: I0114 01:11:23.315735 2467 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:11:23.316272 kubelet[2467]: I0114 01:11:23.316247 2467 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 01:11:23.319766 kubelet[2467]: E0114 01:11:23.319705 2467 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" Jan 14 01:11:23.324243 kubelet[2467]: I0114 01:11:23.324173 2467 server.go:310] "Adding debug handlers to kubelet server" Jan 14 01:11:23.326000 audit[2481]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:23.326000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffff7e46270 a2=0 a3=0 items=0 ppid=2467 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:11:23.328029 kubelet[2467]: I0114 
01:11:23.325035 2467 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 01:11:23.328029 kubelet[2467]: I0114 01:11:23.325129 2467 reconciler.go:29] "Reconciler: start to sync state" Jan 14 01:11:23.328029 kubelet[2467]: E0114 01:11:23.325889 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.154.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:11:23.328029 kubelet[2467]: E0114 01:11:23.326015 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.154.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c80f5dee3b?timeout=10s\": dial tcp 143.198.154.109:6443: connect: connection refused" interval="200ms" Jan 14 01:11:23.328029 kubelet[2467]: I0114 01:11:23.327815 2467 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 14 01:11:23.330634 kubelet[2467]: I0114 01:11:23.330188 2467 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:11:23.330634 kubelet[2467]: I0114 01:11:23.330262 2467 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 01:11:23.330634 kubelet[2467]: I0114 01:11:23.330599 2467 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:11:23.330000 audit[2482]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.330000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe6c1cfe40 a2=0 a3=0 items=0 ppid=2467 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:11:23.334497 kubelet[2467]: I0114 01:11:23.334446 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:11:23.336000 audit[2483]: NETFILTER_CFG table=mangle:44 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:23.336000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc146bfde0 a2=0 a3=0 items=0 ppid=2467 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:11:23.340000 audit[2486]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:23.340000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd46f8fa20 a2=0 a3=0 items=0 ppid=2467 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:11:23.343000 audit[2487]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:23.343000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc687ade20 a2=0 a3=0 items=0 ppid=2467 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:11:23.345456 kubelet[2467]: I0114 01:11:23.345358 2467 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:11:23.345000 audit[2488]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.345000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe30ee9ba0 a2=0 a3=0 items=0 ppid=2467 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:11:23.347081 kubelet[2467]: I0114 01:11:23.346926 2467 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:11:23.349149 kubelet[2467]: I0114 01:11:23.349057 2467 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:11:23.349391 kubelet[2467]: E0114 01:11:23.349124 2467 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:11:23.356000 audit[2493]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.356000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffc5cddd40 a2=0 a3=0 items=0 ppid=2467 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:23.372000 audit[2495]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.372000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9ad1d370 a2=0 a3=0 items=0 ppid=2467 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:23.382462 kubelet[2467]: I0114 01:11:23.382424 2467 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:11:23.382462 kubelet[2467]: I0114 01:11:23.382452 2467 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:11:23.382837 kubelet[2467]: I0114 01:11:23.382486 2467 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:23.399000 audit[2498]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.399000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffb2deaf50 a2=0 a3=0 items=0 ppid=2467 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 14 01:11:23.400415 kubelet[2467]: I0114 01:11:23.400374 2467 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 14 01:11:23.400498 kubelet[2467]: I0114 01:11:23.400420 2467 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 01:11:23.400498 kubelet[2467]: I0114 01:11:23.400463 2467 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 01:11:23.400612 kubelet[2467]: E0114 01:11:23.400538 2467 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:11:23.402000 audit[2499]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.402000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed989dde0 a2=0 a3=0 items=0 ppid=2467 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:11:23.408000 audit[2500]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.408000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb08f30c0 a2=0 a3=0 items=0 ppid=2467 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:11:23.410000 audit[2502]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:23.410000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5b22d380 a2=0 a3=0 items=0 ppid=2467 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:23.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:11:23.425879 kubelet[2467]: E0114 01:11:23.412205 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.154.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:11:23.425879 kubelet[2467]: E0114 01:11:23.420071 2467 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" Jan 14 01:11:23.471305 kubelet[2467]: I0114 01:11:23.471251 2467 policy_none.go:49] "None policy: Start" Jan 14 01:11:23.471305 kubelet[2467]: I0114 01:11:23.471297 2467 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 01:11:23.471305 kubelet[2467]: I0114 01:11:23.471317 2467 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 01:11:23.476877 kubelet[2467]: I0114 01:11:23.476822 2467 policy_none.go:47] "Start" Jan 14 01:11:23.484572 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:11:23.500752 kubelet[2467]: E0114 01:11:23.500598 2467 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:11:23.503151 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:11:23.510705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:11:23.520932 kubelet[2467]: E0114 01:11:23.520879 2467 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" Jan 14 01:11:23.526480 kubelet[2467]: E0114 01:11:23.526140 2467 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:11:23.526480 kubelet[2467]: I0114 01:11:23.526391 2467 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:11:23.526480 kubelet[2467]: I0114 01:11:23.526406 2467 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:11:23.526926 kubelet[2467]: E0114 01:11:23.526888 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.154.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c80f5dee3b?timeout=10s\": dial tcp 143.198.154.109:6443: connect: connection refused" interval="400ms" Jan 14 01:11:23.527258 kubelet[2467]: I0114 01:11:23.527241 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:11:23.528907 kubelet[2467]: E0114 01:11:23.528882 2467 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:11:23.529164 kubelet[2467]: E0114 01:11:23.529100 2467 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578.0.0-p-c80f5dee3b\" not found" Jan 14 01:11:23.628178 kubelet[2467]: I0114 01:11:23.628129 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.628681 kubelet[2467]: E0114 01:11:23.628608 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.154.109:6443/api/v1/nodes\": dial tcp 143.198.154.109:6443: connect: connection refused" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.722899 systemd[1]: Created slice kubepods-burstable-pod14cdaec66b12623c3f8a8862920138d9.slice - libcontainer container kubepods-burstable-pod14cdaec66b12623c3f8a8862920138d9.slice. 
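
Every "connection refused" error above targets https://143.198.154.109:6443: the kubelet itself is running, but the kube-apiserver static pod whose volumes and cgroup slices it is setting up here has not started listening yet, so the watches, lease updates, and node registration all fail and are retried. A minimal reachability probe for that endpoint (address and port taken from the log), shown only to illustrate what the kubelet keeps retrying:

```python
# Plain TCP reachability check for the API server endpoint the kubelet is retrying
# above; host/port come from the log, timeout is an arbitrary example value.
import socket

def api_server_reachable(host: str = "143.198.154.109", port: int = 6443,
                         timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ECONNREFUSED (as in the log) as well as timeouts and route errors.
        return False

print("kube-apiserver endpoint reachable:", api_server_reachable())
```
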
Jan 14 01:11:23.727694 kubelet[2467]: I0114 01:11:23.727623 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727694 kubelet[2467]: I0114 01:11:23.727702 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727936 kubelet[2467]: I0114 01:11:23.727735 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727936 kubelet[2467]: I0114 01:11:23.727763 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52275b376f96e5b22ba02aed1822c933-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-c80f5dee3b\" (UID: \"52275b376f96e5b22ba02aed1822c933\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727936 kubelet[2467]: I0114 01:11:23.727805 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727936 kubelet[2467]: I0114 01:11:23.727828 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.727936 kubelet[2467]: I0114 01:11:23.727859 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.728296 kubelet[2467]: I0114 01:11:23.727896 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.728296 kubelet[2467]: I0114 01:11:23.727926 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.738148 kubelet[2467]: E0114 01:11:23.738107 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.746206 systemd[1]: Created slice kubepods-burstable-pod286a4f44bd67fd677688653813ffbc36.slice - libcontainer container kubepods-burstable-pod286a4f44bd67fd677688653813ffbc36.slice. Jan 14 01:11:23.755738 kubelet[2467]: E0114 01:11:23.755701 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.762015 systemd[1]: Created slice kubepods-burstable-pod52275b376f96e5b22ba02aed1822c933.slice - libcontainer container kubepods-burstable-pod52275b376f96e5b22ba02aed1822c933.slice. Jan 14 01:11:23.765769 kubelet[2467]: E0114 01:11:23.765655 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.830110 kubelet[2467]: I0114 01:11:23.830073 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.830882 kubelet[2467]: E0114 01:11:23.830829 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.154.109:6443/api/v1/nodes\": dial tcp 143.198.154.109:6443: connect: connection refused" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:23.927782 kubelet[2467]: E0114 01:11:23.927660 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.154.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c80f5dee3b?timeout=10s\": dial tcp 143.198.154.109:6443: connect: connection refused" interval="800ms" Jan 14 01:11:24.041992 kubelet[2467]: E0114 01:11:24.041859 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:24.043615 containerd[1591]: time="2026-01-14T01:11:24.043538903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-c80f5dee3b,Uid:14cdaec66b12623c3f8a8862920138d9,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:24.061234 kubelet[2467]: E0114 01:11:24.060772 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:24.061939 containerd[1591]: time="2026-01-14T01:11:24.061879713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-c80f5dee3b,Uid:286a4f44bd67fd677688653813ffbc36,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:24.070218 kubelet[2467]: E0114 01:11:24.070180 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:24.071134 containerd[1591]: 
time="2026-01-14T01:11:24.071070721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-c80f5dee3b,Uid:52275b376f96e5b22ba02aed1822c933,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:24.232566 kubelet[2467]: I0114 01:11:24.232522 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:24.233373 kubelet[2467]: E0114 01:11:24.233317 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.154.109:6443/api/v1/nodes\": dial tcp 143.198.154.109:6443: connect: connection refused" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:24.275909 kubelet[2467]: E0114 01:11:24.275842 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.154.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:11:24.378941 kubelet[2467]: E0114 01:11:24.378863 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.154.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:11:24.414376 kubelet[2467]: E0114 01:11:24.414126 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.154.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-c80f5dee3b&limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:11:24.729543 kubelet[2467]: E0114 01:11:24.729048 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.154.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-c80f5dee3b?timeout=10s\": dial tcp 143.198.154.109:6443: connect: connection refused" interval="1.6s" Jan 14 01:11:24.738056 kubelet[2467]: E0114 01:11:24.737999 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.154.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:11:24.840170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987039131.mount: Deactivated successfully. 
Jan 14 01:11:24.852382 containerd[1591]: time="2026-01-14T01:11:24.852193529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:24.860093 containerd[1591]: time="2026-01-14T01:11:24.860009330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:11:24.862997 containerd[1591]: time="2026-01-14T01:11:24.862798185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:24.866763 containerd[1591]: time="2026-01-14T01:11:24.866686676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:11:24.869598 containerd[1591]: time="2026-01-14T01:11:24.869383794Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:24.874014 containerd[1591]: time="2026-01-14T01:11:24.873902006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:24.875384 containerd[1591]: time="2026-01-14T01:11:24.875229811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:11:24.876561 containerd[1591]: time="2026-01-14T01:11:24.876510536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:24.878166 containerd[1591]: time="2026-01-14T01:11:24.878094933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 811.874823ms" Jan 14 01:11:24.883351 containerd[1591]: time="2026-01-14T01:11:24.882963295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 829.598936ms" Jan 14 01:11:24.895131 containerd[1591]: time="2026-01-14T01:11:24.895071235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 820.958338ms" Jan 14 01:11:25.038468 kubelet[2467]: I0114 01:11:25.038281 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:25.040176 kubelet[2467]: E0114 01:11:25.038694 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://143.198.154.109:6443/api/v1/nodes\": dial tcp 143.198.154.109:6443: connect: connection refused" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:25.051818 containerd[1591]: time="2026-01-14T01:11:25.051638432Z" level=info msg="connecting to shim 7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54" address="unix:///run/containerd/s/e3d73b281b96c681cdec921d1e1b0414663272de87f934a5cb9e429d580e2f8f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:25.058417 containerd[1591]: time="2026-01-14T01:11:25.058285534Z" level=info msg="connecting to shim 2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7" address="unix:///run/containerd/s/2cfc58dd7619227d9c06e979e1b483ca7c4bf7b71122e935e34bf018c24836bc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:25.062553 containerd[1591]: time="2026-01-14T01:11:25.062392275Z" level=info msg="connecting to shim f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6" address="unix:///run/containerd/s/32a4df7a99fb1819fc125b0ef2488fb82bacf07559277a8d14e1f40f908735c9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:25.190210 systemd[1]: Started cri-containerd-2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7.scope - libcontainer container 2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7. Jan 14 01:11:25.195346 systemd[1]: Started cri-containerd-f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6.scope - libcontainer container f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6. Jan 14 01:11:25.212014 systemd[1]: Started cri-containerd-7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54.scope - libcontainer container 7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54. Jan 14 01:11:25.246039 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 14 01:11:25.246183 kernel: audit: type=1334 audit(1768353085.243:350): prog-id=81 op=LOAD Jan 14 01:11:25.243000 audit: BPF prog-id=81 op=LOAD Jan 14 01:11:25.248000 audit: BPF prog-id=82 op=LOAD Jan 14 01:11:25.253541 kernel: audit: type=1334 audit(1768353085.248:351): prog-id=82 op=LOAD Jan 14 01:11:25.254425 kernel: audit: type=1300 audit(1768353085.248:351): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.248000 audit[2566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.268357 kernel: audit: type=1327 audit(1768353085.248:351): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.249000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:11:25.249000 audit[2566]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.273040 kernel: audit: type=1334 audit(1768353085.249:352): prog-id=82 op=UNLOAD Jan 14 01:11:25.273148 kernel: audit: type=1300 audit(1768353085.249:352): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.249000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.279526 kernel: audit: type=1327 audit(1768353085.249:352): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit: BPF prog-id=83 op=LOAD Jan 14 01:11:25.289740 kernel: audit: type=1334 audit(1768353085.256:353): prog-id=83 op=LOAD Jan 14 01:11:25.289903 kernel: audit: type=1300 audit(1768353085.256:353): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit[2566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit: BPF prog-id=84 op=LOAD Jan 14 01:11:25.304755 kernel: audit: type=1327 audit(1768353085.256:353): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit[2566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit: BPF 
prog-id=84 op=UNLOAD Jan 14 01:11:25.256000 audit[2566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:11:25.256000 audit[2566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.256000 audit: BPF prog-id=85 op=LOAD Jan 14 01:11:25.256000 audit[2566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234373864393362323836313761613739633137333363393936383338 Jan 14 01:11:25.258000 audit: BPF prog-id=86 op=LOAD Jan 14 01:11:25.260000 audit: BPF prog-id=87 op=LOAD Jan 14 01:11:25.260000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.260000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:11:25.260000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.261000 audit: BPF prog-id=88 op=LOAD Jan 14 01:11:25.267000 audit: BPF prog-id=89 op=LOAD Jan 14 01:11:25.267000 audit[2567]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.267000 audit: BPF prog-id=90 op=LOAD Jan 14 01:11:25.267000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.267000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:11:25.267000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.267000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:11:25.267000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.267000 audit: BPF prog-id=91 op=LOAD Jan 14 01:11:25.267000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2529 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633346131346339366139613830386132623763633634323761336261 Jan 14 01:11:25.268000 audit: BPF prog-id=92 op=LOAD Jan 14 01:11:25.268000 audit[2556]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.268000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:11:25.268000 audit[2556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.268000 audit: BPF prog-id=93 op=LOAD Jan 14 01:11:25.268000 audit[2556]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.286000 audit: BPF prog-id=94 op=LOAD Jan 14 01:11:25.286000 audit[2556]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.303000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:11:25.303000 audit[2556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.303000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:11:25.303000 audit[2556]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.303000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.304000 audit: BPF prog-id=95 op=LOAD Jan 14 01:11:25.304000 audit[2556]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2533 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.304000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731363461626332633536653661343238323132383365366239616365 Jan 14 01:11:25.348560 kubelet[2467]: E0114 01:11:25.347779 2467 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.154.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:11:25.381873 containerd[1591]: time="2026-01-14T01:11:25.381817088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-c80f5dee3b,Uid:286a4f44bd67fd677688653813ffbc36,Namespace:kube-system,Attempt:0,} returns sandbox id \"2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7\"" Jan 14 01:11:25.384164 kubelet[2467]: E0114 01:11:25.384126 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:25.391109 containerd[1591]: time="2026-01-14T01:11:25.390933481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-c80f5dee3b,Uid:14cdaec66b12623c3f8a8862920138d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6\"" Jan 14 01:11:25.392393 kubelet[2467]: E0114 01:11:25.392285 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:25.396002 containerd[1591]: time="2026-01-14T01:11:25.395783114Z" level=info msg="CreateContainer within sandbox \"2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:11:25.400437 containerd[1591]: time="2026-01-14T01:11:25.399986451Z" level=info msg="CreateContainer within sandbox \"f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:11:25.417764 containerd[1591]: time="2026-01-14T01:11:25.417707906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-c80f5dee3b,Uid:52275b376f96e5b22ba02aed1822c933,Namespace:kube-system,Attempt:0,} returns sandbox id \"7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54\"" Jan 14 01:11:25.418973 kubelet[2467]: E0114 01:11:25.418939 2467 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:25.424116 containerd[1591]: time="2026-01-14T01:11:25.424070460Z" level=info msg="Container 1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:25.425911 containerd[1591]: time="2026-01-14T01:11:25.425867665Z" level=info msg="CreateContainer within sandbox \"7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:11:25.429617 containerd[1591]: time="2026-01-14T01:11:25.429496690Z" level=info msg="Container 406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:25.459985 containerd[1591]: time="2026-01-14T01:11:25.459912417Z" level=info msg="CreateContainer within sandbox \"f34a14c96a9a808a2b7cc6427a3baaf8396f8b52b2a71414b959b314f3d429b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9\"" Jan 14 01:11:25.461011 containerd[1591]: time="2026-01-14T01:11:25.460921864Z" level=info msg="StartContainer for \"406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9\"" Jan 14 01:11:25.462339 containerd[1591]: time="2026-01-14T01:11:25.462271789Z" level=info msg="CreateContainer within sandbox \"2478d93b28617aa79c1733c99683869966b1a931856ba2849cf40e38833511a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818\"" Jan 14 01:11:25.464715 containerd[1591]: time="2026-01-14T01:11:25.464219041Z" level=info msg="StartContainer for \"1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818\"" Jan 14 01:11:25.465205 containerd[1591]: time="2026-01-14T01:11:25.465159905Z" level=info msg="connecting to shim 406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9" address="unix:///run/containerd/s/32a4df7a99fb1819fc125b0ef2488fb82bacf07559277a8d14e1f40f908735c9" protocol=ttrpc version=3 Jan 14 01:11:25.467035 containerd[1591]: time="2026-01-14T01:11:25.467004734Z" level=info msg="Container c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:25.467630 containerd[1591]: time="2026-01-14T01:11:25.467591971Z" level=info msg="connecting to shim 1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818" address="unix:///run/containerd/s/2cfc58dd7619227d9c06e979e1b483ca7c4bf7b71122e935e34bf018c24836bc" protocol=ttrpc version=3 Jan 14 01:11:25.487165 containerd[1591]: time="2026-01-14T01:11:25.487097602Z" level=info msg="CreateContainer within sandbox \"7164abc2c56e6a42821283e6b9ace63ff218448a0ac07e26d24ff65f34c0eb54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425\"" Jan 14 01:11:25.489698 containerd[1591]: time="2026-01-14T01:11:25.489613806Z" level=info msg="StartContainer for \"c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425\"" Jan 14 01:11:25.495869 containerd[1591]: time="2026-01-14T01:11:25.495581407Z" level=info msg="connecting to shim c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425" address="unix:///run/containerd/s/e3d73b281b96c681cdec921d1e1b0414663272de87f934a5cb9e429d580e2f8f" protocol=ttrpc version=3 Jan 14 
01:11:25.509067 systemd[1]: Started cri-containerd-406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9.scope - libcontainer container 406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9. Jan 14 01:11:25.521237 systemd[1]: Started cri-containerd-1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818.scope - libcontainer container 1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818. Jan 14 01:11:25.542996 systemd[1]: Started cri-containerd-c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425.scope - libcontainer container c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425. Jan 14 01:11:25.559000 audit: BPF prog-id=96 op=LOAD Jan 14 01:11:25.561000 audit: BPF prog-id=97 op=LOAD Jan 14 01:11:25.561000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.561000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:11:25.561000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.562000 audit: BPF prog-id=98 op=LOAD Jan 14 01:11:25.562000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.562000 audit: BPF prog-id=99 op=LOAD Jan 14 01:11:25.562000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.562000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:11:25.562000 audit[2644]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.562000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:11:25.562000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.562000 audit: BPF prog-id=100 op=LOAD Jan 14 01:11:25.562000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2529 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430366530363436623265343435343030663035366436376666666336 Jan 14 01:11:25.564000 audit: BPF prog-id=101 op=LOAD Jan 14 01:11:25.567000 audit: BPF prog-id=102 op=LOAD Jan 14 01:11:25.567000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.567000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:11:25.567000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.567000 audit: BPF prog-id=103 op=LOAD Jan 14 01:11:25.567000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.568000 audit: BPF prog-id=104 op=LOAD Jan 14 01:11:25.568000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.568000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:11:25.568000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.568000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:11:25.568000 audit[2645]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.568000 audit: BPF prog-id=105 op=LOAD Jan 14 01:11:25.568000 audit[2645]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2534 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130303163653932623538303433323939353839393465303435313138 Jan 14 01:11:25.599000 audit: BPF prog-id=106 op=LOAD Jan 14 01:11:25.600000 audit: BPF prog-id=107 op=LOAD Jan 14 01:11:25.600000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.601000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:11:25.601000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.601000 audit: BPF prog-id=108 op=LOAD Jan 14 01:11:25.601000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.602000 audit: BPF prog-id=109 op=LOAD Jan 14 01:11:25.602000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.602000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:11:25.602000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.602000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:11:25.602000 audit[2667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.603000 audit: BPF prog-id=110 op=LOAD Jan 14 01:11:25.603000 audit[2667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2533 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:25.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331616135656338343634393763373730316261383637613666313435 Jan 14 01:11:25.667990 containerd[1591]: time="2026-01-14T01:11:25.667928362Z" level=info msg="StartContainer for \"406e0646b2e445400f056d67fffc6992d5110a4a997746c9870d337872b3e6a9\" returns successfully" Jan 14 01:11:25.673277 containerd[1591]: time="2026-01-14T01:11:25.673193543Z" level=info msg="StartContainer for \"1001ce92b5804329958994e0451182aaf246c174ced6460c51c9f7499b2a8818\" returns successfully" Jan 14 01:11:25.704690 containerd[1591]: time="2026-01-14T01:11:25.704403085Z" level=info msg="StartContainer for \"c1aa5ec846497c7701ba867a6f1453c5ea69391240b20460914513fee1fa8425\" returns successfully" Jan 14 01:11:26.057284 kubelet[2467]: E0114 01:11:26.057216 2467 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.154.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.154.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:11:26.436980 kubelet[2467]: E0114 01:11:26.436714 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:26.437865 kubelet[2467]: E0114 01:11:26.437742 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:26.442726 kubelet[2467]: E0114 01:11:26.440758 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:26.442726 kubelet[2467]: E0114 01:11:26.440970 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:26.447871 kubelet[2467]: E0114 01:11:26.447834 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:26.448273 kubelet[2467]: E0114 01:11:26.448246 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:26.640811 kubelet[2467]: I0114 01:11:26.640769 2467 kubelet_node_status.go:75] "Attempting 
to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.450282 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.450453 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.453216 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.453404 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.453830 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:27.450563 kubelet[2467]: E0114 01:11:27.453962 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:28.452769 kubelet[2467]: E0114 01:11:28.452725 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.453929 kubelet[2467]: E0114 01:11:28.453881 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.454097 kubelet[2467]: E0114 01:11:28.453965 2467 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-c80f5dee3b\" not found" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.454605 kubelet[2467]: E0114 01:11:28.454177 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:28.454605 kubelet[2467]: E0114 01:11:28.454300 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:28.454605 kubelet[2467]: E0114 01:11:28.454329 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:28.862820 kubelet[2467]: I0114 01:11:28.861202 2467 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.925104 kubelet[2467]: I0114 01:11:28.925046 2467 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.965812 kubelet[2467]: E0114 01:11:28.965768 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 14 
01:11:28.974716 kubelet[2467]: E0114 01:11:28.972902 2467 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.974987 kubelet[2467]: I0114 01:11:28.974811 2467 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.980622 kubelet[2467]: E0114 01:11:28.980551 2467 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.980622 kubelet[2467]: I0114 01:11:28.980586 2467 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:28.985500 kubelet[2467]: E0114 01:11:28.985446 2467 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-c80f5dee3b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:29.282859 kubelet[2467]: I0114 01:11:29.282766 2467 apiserver.go:52] "Watching apiserver" Jan 14 01:11:29.325888 kubelet[2467]: I0114 01:11:29.325820 2467 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 01:11:29.453163 kubelet[2467]: I0114 01:11:29.453095 2467 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:29.456340 kubelet[2467]: E0114 01:11:29.456271 2467 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:29.456568 kubelet[2467]: E0114 01:11:29.456519 2467 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:29.782949 update_engine[1570]: I20260114 01:11:29.782840 1570 update_attempter.cc:509] Updating boot flags... Jan 14 01:11:31.342160 systemd[1]: Reload requested from client PID 2763 ('systemctl') (unit session-10.scope)... Jan 14 01:11:31.342700 systemd[1]: Reloading... Jan 14 01:11:31.523706 zram_generator::config[2812]: No configuration found. Jan 14 01:11:31.910922 systemd[1]: Reloading finished in 567 ms. Jan 14 01:11:31.958121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:31.975430 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:11:31.975817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:31.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:31.977448 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 14 01:11:31.977552 kernel: audit: type=1131 audit(1768353091.975:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:31.977698 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 123.6M memory peak. Jan 14 01:11:31.985000 audit: BPF prog-id=111 op=LOAD Jan 14 01:11:31.985922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:31.988712 kernel: audit: type=1334 audit(1768353091.985:399): prog-id=111 op=LOAD Jan 14 01:11:31.991910 kernel: audit: type=1334 audit(1768353091.985:400): prog-id=63 op=UNLOAD Jan 14 01:11:31.985000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:11:31.996720 kernel: audit: type=1334 audit(1768353091.985:401): prog-id=112 op=LOAD Jan 14 01:11:31.985000 audit: BPF prog-id=112 op=LOAD Jan 14 01:11:31.985000 audit: BPF prog-id=113 op=LOAD Jan 14 01:11:32.000696 kernel: audit: type=1334 audit(1768353091.985:402): prog-id=113 op=LOAD Jan 14 01:11:31.985000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:11:32.004701 kernel: audit: type=1334 audit(1768353091.985:403): prog-id=64 op=UNLOAD Jan 14 01:11:32.007933 kernel: audit: type=1334 audit(1768353091.985:404): prog-id=65 op=UNLOAD Jan 14 01:11:31.985000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:11:31.986000 audit: BPF prog-id=114 op=LOAD Jan 14 01:11:31.986000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:11:32.011152 kernel: audit: type=1334 audit(1768353091.986:405): prog-id=114 op=LOAD Jan 14 01:11:32.011274 kernel: audit: type=1334 audit(1768353091.986:406): prog-id=78 op=UNLOAD Jan 14 01:11:31.986000 audit: BPF prog-id=115 op=LOAD Jan 14 01:11:32.013081 kernel: audit: type=1334 audit(1768353091.986:407): prog-id=115 op=LOAD Jan 14 01:11:31.986000 audit: BPF prog-id=116 op=LOAD Jan 14 01:11:31.986000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:11:31.986000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:11:31.989000 audit: BPF prog-id=117 op=LOAD Jan 14 01:11:31.989000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:11:31.989000 audit: BPF prog-id=118 op=LOAD Jan 14 01:11:31.989000 audit: BPF prog-id=119 op=LOAD Jan 14 01:11:31.989000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:11:31.989000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:11:31.989000 audit: BPF prog-id=120 op=LOAD Jan 14 01:11:31.989000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:11:31.991000 audit: BPF prog-id=121 op=LOAD Jan 14 01:11:31.991000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:11:31.993000 audit: BPF prog-id=122 op=LOAD Jan 14 01:11:31.993000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:11:31.995000 audit: BPF prog-id=123 op=LOAD Jan 14 01:11:31.995000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:11:31.995000 audit: BPF prog-id=124 op=LOAD Jan 14 01:11:31.995000 audit: BPF prog-id=125 op=LOAD Jan 14 01:11:31.995000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:11:31.995000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:11:31.997000 audit: BPF prog-id=126 op=LOAD Jan 14 01:11:31.997000 audit: BPF prog-id=127 op=LOAD Jan 14 01:11:31.997000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:11:31.997000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:11:31.997000 audit: BPF prog-id=128 op=LOAD Jan 14 01:11:31.997000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:11:31.997000 audit: BPF prog-id=129 op=LOAD Jan 14 01:11:31.997000 audit: BPF prog-id=130 op=LOAD Jan 14 01:11:31.997000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:11:31.997000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:11:32.214705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:32.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:11:32.228342 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:11:32.344714 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:11:32.344714 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:32.346717 kubelet[2860]: I0114 01:11:32.345610 2860 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:11:32.360558 kubelet[2860]: I0114 01:11:32.360498 2860 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 01:11:32.360558 kubelet[2860]: I0114 01:11:32.360536 2860 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:11:32.362815 kubelet[2860]: I0114 01:11:32.362731 2860 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 01:11:32.362815 kubelet[2860]: I0114 01:11:32.362827 2860 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:11:32.363535 kubelet[2860]: I0114 01:11:32.363191 2860 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:11:32.366517 kubelet[2860]: I0114 01:11:32.365511 2860 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:11:32.373170 kubelet[2860]: I0114 01:11:32.372479 2860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:11:32.382549 kubelet[2860]: I0114 01:11:32.380979 2860 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:11:32.388465 kubelet[2860]: I0114 01:11:32.387853 2860 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 01:11:32.391208 kubelet[2860]: I0114 01:11:32.391131 2860 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:11:32.392163 kubelet[2860]: I0114 01:11:32.391201 2860 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-c80f5dee3b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:11:32.392163 kubelet[2860]: I0114 01:11:32.391554 2860 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:11:32.392163 kubelet[2860]: I0114 01:11:32.391570 2860 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 01:11:32.392163 kubelet[2860]: I0114 01:11:32.391613 2860 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 01:11:32.397739 kubelet[2860]: I0114 01:11:32.397073 2860 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:32.403694 kubelet[2860]: I0114 01:11:32.403360 2860 kubelet.go:475] "Attempting to sync node with API server" Jan 14 01:11:32.403997 kubelet[2860]: I0114 01:11:32.403931 2860 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:11:32.406644 kubelet[2860]: I0114 01:11:32.406038 2860 kubelet.go:387] "Adding apiserver pod source" Jan 14 01:11:32.406804 kubelet[2860]: I0114 01:11:32.406686 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:11:32.429045 kubelet[2860]: I0114 01:11:32.428843 2860 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:11:32.430702 kubelet[2860]: I0114 01:11:32.429467 2860 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:11:32.430702 kubelet[2860]: I0114 01:11:32.429506 2860 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 
01:11:32.457896 kubelet[2860]: I0114 01:11:32.456376 2860 server.go:1262] "Started kubelet" Jan 14 01:11:32.460887 kubelet[2860]: I0114 01:11:32.460857 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:11:32.475655 kubelet[2860]: I0114 01:11:32.475430 2860 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:11:32.483200 kubelet[2860]: I0114 01:11:32.483019 2860 server.go:310] "Adding debug handlers to kubelet server" Jan 14 01:11:32.497101 kubelet[2860]: I0114 01:11:32.495007 2860 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:11:32.497101 kubelet[2860]: I0114 01:11:32.495081 2860 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 01:11:32.497101 kubelet[2860]: I0114 01:11:32.495264 2860 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:11:32.497101 kubelet[2860]: I0114 01:11:32.495607 2860 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:11:32.499521 kubelet[2860]: I0114 01:11:32.499423 2860 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 01:11:32.502269 kubelet[2860]: I0114 01:11:32.502005 2860 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 01:11:32.503759 kubelet[2860]: I0114 01:11:32.503725 2860 reconciler.go:29] "Reconciler: start to sync state" Jan 14 01:11:32.509899 kubelet[2860]: I0114 01:11:32.508677 2860 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:11:32.510310 kubelet[2860]: I0114 01:11:32.510271 2860 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:11:32.514006 kubelet[2860]: I0114 01:11:32.512278 2860 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 01:11:32.517489 kubelet[2860]: I0114 01:11:32.516845 2860 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 14 01:11:32.517489 kubelet[2860]: I0114 01:11:32.516881 2860 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 01:11:32.517721 kubelet[2860]: I0114 01:11:32.517527 2860 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 01:11:32.517721 kubelet[2860]: E0114 01:11:32.517618 2860 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:11:32.520569 kubelet[2860]: I0114 01:11:32.520532 2860 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:11:32.531558 kubelet[2860]: E0114 01:11:32.531454 2860 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:11:32.621463 kubelet[2860]: E0114 01:11:32.618115 2860 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:11:32.631706 kubelet[2860]: I0114 01:11:32.631583 2860 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:11:32.631706 kubelet[2860]: I0114 01:11:32.631604 2860 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:11:32.631706 kubelet[2860]: I0114 01:11:32.631629 2860 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:32.634215 kubelet[2860]: I0114 01:11:32.633659 2860 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:11:32.634215 kubelet[2860]: I0114 01:11:32.634074 2860 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:11:32.634215 kubelet[2860]: I0114 01:11:32.634112 2860 policy_none.go:49] "None policy: Start" Jan 14 01:11:32.634215 kubelet[2860]: I0114 01:11:32.634129 2860 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 01:11:32.634215 kubelet[2860]: I0114 01:11:32.634148 2860 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 01:11:32.635154 kubelet[2860]: I0114 01:11:32.634533 2860 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 14 01:11:32.635154 kubelet[2860]: I0114 01:11:32.634547 2860 policy_none.go:47] "Start" Jan 14 01:11:32.644605 kubelet[2860]: E0114 01:11:32.644560 2860 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:11:32.645245 kubelet[2860]: I0114 01:11:32.645162 2860 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:11:32.646416 kubelet[2860]: I0114 01:11:32.645183 2860 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:11:32.648081 kubelet[2860]: I0114 01:11:32.648064 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:11:32.656712 kubelet[2860]: E0114 01:11:32.656407 2860 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:11:32.758648 kubelet[2860]: I0114 01:11:32.758500 2860 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.772724 kubelet[2860]: I0114 01:11:32.771911 2860 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.772724 kubelet[2860]: I0114 01:11:32.772008 2860 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.823644 kubelet[2860]: I0114 01:11:32.823565 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.823939 kubelet[2860]: I0114 01:11:32.823920 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.843531 kubelet[2860]: I0114 01:11:32.843332 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:32.849454 kubelet[2860]: I0114 01:11:32.848454 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:32.852561 kubelet[2860]: I0114 01:11:32.852508 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:32.864629 kubelet[2860]: I0114 01:11:32.864553 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:33.013437 kubelet[2860]: I0114 01:11:33.013194 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.013930 kubelet[2860]: I0114 01:11:33.013656 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.013930 kubelet[2860]: I0114 01:11:33.013748 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.013930 kubelet[2860]: I0114 01:11:33.013778 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.013930 kubelet[2860]: I0114 01:11:33.013880 2860 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14cdaec66b12623c3f8a8862920138d9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-c80f5dee3b\" (UID: \"14cdaec66b12623c3f8a8862920138d9\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.014829 kubelet[2860]: I0114 01:11:33.013911 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.014829 kubelet[2860]: I0114 01:11:33.014440 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.014829 kubelet[2860]: I0114 01:11:33.014669 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/286a4f44bd67fd677688653813ffbc36-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-c80f5dee3b\" (UID: \"286a4f44bd67fd677688653813ffbc36\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.014829 kubelet[2860]: I0114 01:11:33.014702 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52275b376f96e5b22ba02aed1822c933-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-c80f5dee3b\" (UID: \"52275b376f96e5b22ba02aed1822c933\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.148631 kubelet[2860]: E0114 01:11:33.148575 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:33.154392 kubelet[2860]: E0114 01:11:33.154052 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:33.165626 kubelet[2860]: E0114 01:11:33.165571 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:33.421513 kubelet[2860]: I0114 01:11:33.420602 2860 apiserver.go:52] "Watching apiserver" Jan 14 01:11:33.504717 kubelet[2860]: I0114 01:11:33.502926 2860 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 01:11:33.566831 kubelet[2860]: I0114 01:11:33.566721 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" podStartSLOduration=1.56669693 podStartE2EDuration="1.56669693s" podCreationTimestamp="2026-01-14 01:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 
01:11:33.55048022 +0000 UTC m=+1.299481262" watchObservedRunningTime="2026-01-14 01:11:33.56669693 +0000 UTC m=+1.315697972" Jan 14 01:11:33.581699 kubelet[2860]: I0114 01:11:33.581337 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-c80f5dee3b" podStartSLOduration=1.581311506 podStartE2EDuration="1.581311506s" podCreationTimestamp="2026-01-14 01:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:33.566687883 +0000 UTC m=+1.315688927" watchObservedRunningTime="2026-01-14 01:11:33.581311506 +0000 UTC m=+1.330312550" Jan 14 01:11:33.602095 kubelet[2860]: I0114 01:11:33.602050 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.605989 kubelet[2860]: E0114 01:11:33.604614 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:33.605989 kubelet[2860]: E0114 01:11:33.605842 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:33.606742 kubelet[2860]: I0114 01:11:33.606555 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578.0.0-p-c80f5dee3b" podStartSLOduration=1.6065357919999999 podStartE2EDuration="1.606535792s" podCreationTimestamp="2026-01-14 01:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:33.584814611 +0000 UTC m=+1.333815656" watchObservedRunningTime="2026-01-14 01:11:33.606535792 +0000 UTC m=+1.355536832" Jan 14 01:11:33.617285 kubelet[2860]: I0114 01:11:33.617180 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:33.617285 kubelet[2860]: E0114 01:11:33.617267 2860 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-c80f5dee3b\" already exists" pod="kube-system/kube-scheduler-ci-4578.0.0-p-c80f5dee3b" Jan 14 01:11:33.617748 kubelet[2860]: E0114 01:11:33.617492 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:34.607078 kubelet[2860]: E0114 01:11:34.607040 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:34.607873 kubelet[2860]: E0114 01:11:34.607585 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:36.744333 kubelet[2860]: E0114 01:11:36.743639 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:36.825331 kubelet[2860]: I0114 01:11:36.825066 2860 kuberuntime_manager.go:1828] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:11:36.825980 containerd[1591]: time="2026-01-14T01:11:36.825913684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:11:36.827067 kubelet[2860]: I0114 01:11:36.826776 2860 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:11:37.403716 kubelet[2860]: E0114 01:11:37.403543 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:37.616025 kubelet[2860]: E0114 01:11:37.615864 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:37.617438 kubelet[2860]: E0114 01:11:37.617398 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:37.747697 systemd[1]: Created slice kubepods-besteffort-pod9288e19c_25e7_44a6_9e1c_5b32bb318961.slice - libcontainer container kubepods-besteffort-pod9288e19c_25e7_44a6_9e1c_5b32bb318961.slice. Jan 14 01:11:37.750499 kubelet[2860]: I0114 01:11:37.749775 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9288e19c-25e7-44a6-9e1c-5b32bb318961-xtables-lock\") pod \"kube-proxy-z8t82\" (UID: \"9288e19c-25e7-44a6-9e1c-5b32bb318961\") " pod="kube-system/kube-proxy-z8t82" Jan 14 01:11:37.750499 kubelet[2860]: I0114 01:11:37.749815 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9288e19c-25e7-44a6-9e1c-5b32bb318961-lib-modules\") pod \"kube-proxy-z8t82\" (UID: \"9288e19c-25e7-44a6-9e1c-5b32bb318961\") " pod="kube-system/kube-proxy-z8t82" Jan 14 01:11:37.750499 kubelet[2860]: I0114 01:11:37.749846 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4z9q\" (UniqueName: \"kubernetes.io/projected/9288e19c-25e7-44a6-9e1c-5b32bb318961-kube-api-access-b4z9q\") pod \"kube-proxy-z8t82\" (UID: \"9288e19c-25e7-44a6-9e1c-5b32bb318961\") " pod="kube-system/kube-proxy-z8t82" Jan 14 01:11:37.750499 kubelet[2860]: I0114 01:11:37.749872 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9288e19c-25e7-44a6-9e1c-5b32bb318961-kube-proxy\") pod \"kube-proxy-z8t82\" (UID: \"9288e19c-25e7-44a6-9e1c-5b32bb318961\") " pod="kube-system/kube-proxy-z8t82" Jan 14 01:11:37.983265 systemd[1]: Created slice kubepods-besteffort-podffd182ab_a098_4191_9980_0113ffbde8a5.slice - libcontainer container kubepods-besteffort-podffd182ab_a098_4191_9980_0113ffbde8a5.slice. 
Jan 14 01:11:38.053027 kubelet[2860]: I0114 01:11:38.052849 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5zg\" (UniqueName: \"kubernetes.io/projected/ffd182ab-a098-4191-9980-0113ffbde8a5-kube-api-access-lq5zg\") pod \"tigera-operator-65cdcdfd6d-mslgb\" (UID: \"ffd182ab-a098-4191-9980-0113ffbde8a5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mslgb" Jan 14 01:11:38.053740 kubelet[2860]: I0114 01:11:38.053622 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffd182ab-a098-4191-9980-0113ffbde8a5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-mslgb\" (UID: \"ffd182ab-a098-4191-9980-0113ffbde8a5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mslgb" Jan 14 01:11:38.060782 kubelet[2860]: E0114 01:11:38.060728 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:38.062098 containerd[1591]: time="2026-01-14T01:11:38.062040524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8t82,Uid:9288e19c-25e7-44a6-9e1c-5b32bb318961,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:38.095912 containerd[1591]: time="2026-01-14T01:11:38.095845610Z" level=info msg="connecting to shim 38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904" address="unix:///run/containerd/s/5a62712574983530d0c838854f5e47a214b13efbb5070d957ac2d2d085435a5a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:38.141053 systemd[1]: Started cri-containerd-38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904.scope - libcontainer container 38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904. 
Jan 14 01:11:38.157000 audit: BPF prog-id=131 op=LOAD Jan 14 01:11:38.159094 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:11:38.159192 kernel: audit: type=1334 audit(1768353098.157:440): prog-id=131 op=LOAD Jan 14 01:11:38.162000 audit: BPF prog-id=132 op=LOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.166779 kernel: audit: type=1334 audit(1768353098.162:441): prog-id=132 op=LOAD Jan 14 01:11:38.166930 kernel: audit: type=1300 audit(1768353098.162:441): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.173185 kernel: audit: type=1327 audit(1768353098.162:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=132 op=UNLOAD Jan 14 01:11:38.179930 kernel: audit: type=1334 audit(1768353098.162:442): prog-id=132 op=UNLOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.188778 kernel: audit: type=1300 audit(1768353098.162:442): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=133 op=LOAD Jan 14 01:11:38.195869 kernel: audit: type=1327 audit(1768353098.162:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.195954 kernel: audit: type=1334 audit(1768353098.162:443): prog-id=133 op=LOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.198957 kernel: audit: type=1300 audit(1768353098.162:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.204645 kernel: audit: type=1327 audit(1768353098.162:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=134 op=LOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.162000 audit: BPF prog-id=135 op=LOAD Jan 14 01:11:38.162000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2918 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.162000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338323235643261623239633934613862626636306435646562636662 Jan 14 01:11:38.215808 containerd[1591]: time="2026-01-14T01:11:38.215693956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8t82,Uid:9288e19c-25e7-44a6-9e1c-5b32bb318961,Namespace:kube-system,Attempt:0,} returns sandbox id \"38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904\"" Jan 14 01:11:38.216997 kubelet[2860]: E0114 01:11:38.216961 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:38.224616 containerd[1591]: time="2026-01-14T01:11:38.224426147Z" level=info msg="CreateContainer within sandbox \"38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:11:38.249135 containerd[1591]: time="2026-01-14T01:11:38.249092552Z" level=info msg="Container 9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:38.260894 containerd[1591]: time="2026-01-14T01:11:38.260840529Z" level=info msg="CreateContainer within sandbox \"38225d2ab29c94a8bbf60d5debcfb80d98a2233ab28281a3691f313d9e6cd904\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984\"" Jan 14 01:11:38.262748 containerd[1591]: time="2026-01-14T01:11:38.262602353Z" level=info msg="StartContainer for \"9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984\"" Jan 14 01:11:38.265748 containerd[1591]: time="2026-01-14T01:11:38.265697323Z" level=info msg="connecting to shim 9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984" address="unix:///run/containerd/s/5a62712574983530d0c838854f5e47a214b13efbb5070d957ac2d2d085435a5a" protocol=ttrpc version=3 Jan 14 01:11:38.293051 containerd[1591]: time="2026-01-14T01:11:38.292523934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mslgb,Uid:ffd182ab-a098-4191-9980-0113ffbde8a5,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:11:38.296075 systemd[1]: Started cri-containerd-9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984.scope - libcontainer container 9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984. Jan 14 01:11:38.322770 containerd[1591]: time="2026-01-14T01:11:38.322613884Z" level=info msg="connecting to shim 7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4" address="unix:///run/containerd/s/d839d08e18779e799dcf1924a386f88390981d39a36617449bd8575b29b0f8fb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:38.356401 systemd[1]: Started cri-containerd-7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4.scope - libcontainer container 7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4. 
Jan 14 01:11:38.361000 audit: BPF prog-id=136 op=LOAD Jan 14 01:11:38.361000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2918 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633830333433353130393032626463353432373462333462613939 Jan 14 01:11:38.361000 audit: BPF prog-id=137 op=LOAD Jan 14 01:11:38.361000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2918 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633830333433353130393032626463353432373462333462613939 Jan 14 01:11:38.361000 audit: BPF prog-id=137 op=UNLOAD Jan 14 01:11:38.361000 audit[2956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633830333433353130393032626463353432373462333462613939 Jan 14 01:11:38.361000 audit: BPF prog-id=136 op=UNLOAD Jan 14 01:11:38.361000 audit[2956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2918 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633830333433353130393032626463353432373462333462613939 Jan 14 01:11:38.361000 audit: BPF prog-id=138 op=LOAD Jan 14 01:11:38.361000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2918 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966633830333433353130393032626463353432373462333462613939 Jan 14 01:11:38.382000 audit: BPF prog-id=139 op=LOAD Jan 14 01:11:38.385000 audit: BPF prog-id=140 op=LOAD Jan 14 01:11:38.385000 audit[2997]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.387000 audit: BPF prog-id=140 op=UNLOAD Jan 14 01:11:38.387000 audit[2997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.388000 audit: BPF prog-id=141 op=LOAD Jan 14 01:11:38.388000 audit[2997]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.389000 audit: BPF prog-id=142 op=LOAD Jan 14 01:11:38.389000 audit[2997]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.389000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:11:38.389000 audit[2997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.390000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:11:38.390000 audit[2997]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.390000 audit: BPF prog-id=143 op=LOAD Jan 14 01:11:38.390000 audit[2997]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2985 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326438663362383730356235636562346339613565626530656430 Jan 14 01:11:38.403749 containerd[1591]: time="2026-01-14T01:11:38.403683808Z" level=info msg="StartContainer for \"9fc80343510902bdc54274b34ba99a03bf83c9afea7716a3c380c911bd557984\" returns successfully" Jan 14 01:11:38.450577 containerd[1591]: time="2026-01-14T01:11:38.450526993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mslgb,Uid:ffd182ab-a098-4191-9980-0113ffbde8a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4\"" Jan 14 01:11:38.455110 containerd[1591]: time="2026-01-14T01:11:38.455071658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:11:38.626931 kubelet[2860]: E0114 01:11:38.626882 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:38.627627 kubelet[2860]: E0114 01:11:38.627401 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:38.883418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378709952.mount: Deactivated successfully. 
Jan 14 01:11:38.907000 audit[3067]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:38.907000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc068494d0 a2=0 a3=7ffc068494bc items=0 ppid=2970 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:11:38.907000 audit[3068]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:38.907000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb20fb9a0 a2=0 a3=7ffcb20fb98c items=0 ppid=2970 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:11:38.911000 audit[3069]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:38.911000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7780f120 a2=0 a3=7ffd7780f10c items=0 ppid=2970 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:11:38.920000 audit[3070]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:38.920000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde3654c90 a2=0 a3=7ffde3654c7c items=0 ppid=2970 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:11:38.922000 audit[3072]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:38.922000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff37945460 a2=0 a3=7fff3794544c items=0 ppid=2970 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:11:38.932000 audit[3076]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:38.932000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc96102830 a2=0 a3=7ffc9610281c items=0 ppid=2970 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:38.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:11:39.030000 audit[3077]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.030000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc9365c830 a2=0 a3=7ffc9365c81c items=0 ppid=2970 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:11:39.050000 audit[3079]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.050000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7dc58ed0 a2=0 a3=7fff7dc58ebc items=0 ppid=2970 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.050000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 14 01:11:39.057000 audit[3082]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.057000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd01ce1810 a2=0 a3=7ffd01ce17fc items=0 ppid=2970 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 01:11:39.059000 audit[3083]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.059000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf829afe0 a2=0 a3=7ffdf829afcc items=0 ppid=2970 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:11:39.063000 audit[3085]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:11:39.063000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcecaa2810 a2=0 a3=7ffcecaa27fc items=0 ppid=2970 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:11:39.065000 audit[3086]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.065000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd92867330 a2=0 a3=7ffd9286731c items=0 ppid=2970 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:11:39.069000 audit[3088]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.069000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf0cf3c20 a2=0 a3=7ffdf0cf3c0c items=0 ppid=2970 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.076000 audit[3091]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.076000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1ed3a790 a2=0 a3=7ffd1ed3a77c items=0 ppid=2970 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.078000 audit[3092]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.078000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8c94230 a2=0 a3=7ffeb8c9421c items=0 ppid=2970 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.078000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:11:39.082000 audit[3094]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.082000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc70ae010 a2=0 a3=7ffdc70adffc items=0 ppid=2970 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:11:39.088000 audit[3095]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.088000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1e024f90 a2=0 a3=7ffd1e024f7c items=0 ppid=2970 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:11:39.093000 audit[3097]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.093000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0ac092c0 a2=0 a3=7ffd0ac092ac items=0 ppid=2970 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 14 01:11:39.100000 audit[3100]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.100000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffb0f7d20 a2=0 a3=7ffffb0f7d0c items=0 ppid=2970 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 01:11:39.107000 audit[3103]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.107000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce8488410 a2=0 a3=7ffce84883fc items=0 ppid=2970 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 01:11:39.109000 audit[3104]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.109000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea594c030 a2=0 a3=7ffea594c01c items=0 ppid=2970 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:11:39.115000 audit[3106]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.115000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe2e088000 a2=0 a3=7ffe2e087fec items=0 ppid=2970 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.122000 audit[3109]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.122000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdae7bc960 a2=0 a3=7ffdae7bc94c items=0 ppid=2970 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.124000 audit[3110]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.124000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb24be610 a2=0 a3=7ffcb24be5fc items=0 ppid=2970 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:11:39.129000 audit[3112]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:39.129000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffebe2f83e0 a2=0 a3=7ffebe2f83cc 
items=0 ppid=2970 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:11:39.170000 audit[3118]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:39.170000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe3cf22a60 a2=0 a3=7ffe3cf22a4c items=0 ppid=2970 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:39.180000 audit[3118]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:39.180000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe3cf22a60 a2=0 a3=7ffe3cf22a4c items=0 ppid=2970 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:39.183000 audit[3123]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.183000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdc4ecddd0 a2=0 a3=7ffdc4ecddbc items=0 ppid=2970 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:11:39.188000 audit[3125]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.188000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe785cce0 a2=0 a3=7fffe785cccc items=0 ppid=2970 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 01:11:39.195000 audit[3128]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.195000 audit[3128]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=752 a0=3 a1=7ffdb434c1b0 a2=0 a3=7ffdb434c19c items=0 ppid=2970 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 14 01:11:39.198000 audit[3129]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.198000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4607b520 a2=0 a3=7ffe4607b50c items=0 ppid=2970 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:11:39.203000 audit[3131]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.203000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefaa18a10 a2=0 a3=7ffefaa189fc items=0 ppid=2970 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:11:39.205000 audit[3132]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.205000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb18a96a0 a2=0 a3=7ffeb18a968c items=0 ppid=2970 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:11:39.211000 audit[3134]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.211000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5e87bfd0 a2=0 a3=7ffd5e87bfbc items=0 ppid=2970 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 
01:11:39.220000 audit[3137]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.220000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc87cd3fc0 a2=0 a3=7ffc87cd3fac items=0 ppid=2970 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.222000 audit[3138]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.222000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2159f60 a2=0 a3=7ffef2159f4c items=0 ppid=2970 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:11:39.227000 audit[3140]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.227000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffceff83020 a2=0 a3=7ffceff8300c items=0 ppid=2970 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:11:39.230000 audit[3141]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.230000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebc07d0e0 a2=0 a3=7ffebc07d0cc items=0 ppid=2970 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:11:39.235000 audit[3143]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.235000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8933cb90 a2=0 a3=7ffe8933cb7c items=0 ppid=2970 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.235000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 01:11:39.241000 audit[3146]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.241000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd00e8bd30 a2=0 a3=7ffd00e8bd1c items=0 ppid=2970 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 01:11:39.249000 audit[3149]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.249000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5947d220 a2=0 a3=7ffe5947d20c items=0 ppid=2970 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 14 01:11:39.252000 audit[3150]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.252000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe997fa470 a2=0 a3=7ffe997fa45c items=0 ppid=2970 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:11:39.257000 audit[3152]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.257000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff4717bbf0 a2=0 a3=7fff4717bbdc items=0 ppid=2970 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.264000 audit[3155]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.264000 audit[3155]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7b4b1fa0 a2=0 a3=7ffe7b4b1f8c items=0 ppid=2970 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:39.266000 audit[3156]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.266000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2d8f7c80 a2=0 a3=7fff2d8f7c6c items=0 ppid=2970 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:11:39.271000 audit[3158]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.271000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff9c33b130 a2=0 a3=7fff9c33b11c items=0 ppid=2970 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:11:39.273000 audit[3159]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.273000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff577659d0 a2=0 a3=7fff577659bc items=0 ppid=2970 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:11:39.278000 audit[3161]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.278000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffd04e7430 a2=0 a3=7fffd04e741c items=0 ppid=2970 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:39.285000 audit[3164]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:39.285000 audit[3164]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=228 a0=3 a1=7ffe6946f870 a2=0 a3=7ffe6946f85c items=0 ppid=2970 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:39.291000 audit[3166]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:11:39.291000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffccaba52b0 a2=0 a3=7ffccaba529c items=0 ppid=2970 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.291000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:39.292000 audit[3166]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:11:39.292000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffccaba52b0 a2=0 a3=7ffccaba529c items=0 ppid=2970 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.292000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:40.035599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427312102.mount: Deactivated successfully. 
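The NETFILTER_CFG/SYSCALL/PROCTITLE triplets between 01:11:38.907 and 01:11:39.292 record kube-proxy programming chains such as KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING in the mangle, nat and filter tables for both IPv4 (family=2) and IPv6 (family=10) via /usr/sbin/xtables-nft-multi, then loading rule batches with iptables-restore -w 5 --noflush --counters. A minimal tallying sketch over such records (assuming Python 3; the field layout is taken from the lines above):

    import re
    from collections import Counter

    # Count NETFILTER_CFG registrations per (protocol, table, operation) in an
    # audit log whose records look like the ones above, e.g.
    #   "... NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule ..."
    NFT = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

    def tally_netfilter(lines):
        counts = Counter()
        for line in lines:
            for table, family, entries, op in NFT.findall(line):
                proto = {"2": "ipv4", "10": "ipv6"}.get(family, "family=" + family)
                counts[(proto, table, op)] += int(entries)
        return counts

    # Usage (hypothetical file name): tally_netfilter(open("node-audit.log"))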
Jan 14 01:11:40.940823 containerd[1591]: time="2026-01-14T01:11:40.940745165Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:40.942684 containerd[1591]: time="2026-01-14T01:11:40.942311943Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:11:40.943853 containerd[1591]: time="2026-01-14T01:11:40.943799399Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:40.947618 containerd[1591]: time="2026-01-14T01:11:40.947553160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:40.948736 containerd[1591]: time="2026-01-14T01:11:40.948685107Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.493343878s" Jan 14 01:11:40.948736 containerd[1591]: time="2026-01-14T01:11:40.948738600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:11:40.961602 containerd[1591]: time="2026-01-14T01:11:40.961518692Z" level=info msg="CreateContainer within sandbox \"7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:11:40.974885 containerd[1591]: time="2026-01-14T01:11:40.974827201Z" level=info msg="Container e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:40.987494 containerd[1591]: time="2026-01-14T01:11:40.987402289Z" level=info msg="CreateContainer within sandbox \"7b2d8f3b8705b5ceb4c9a5ebe0ed035d157af57cd958a6da31038b2b1fb5ced4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf\"" Jan 14 01:11:40.988720 containerd[1591]: time="2026-01-14T01:11:40.988595716Z" level=info msg="StartContainer for \"e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf\"" Jan 14 01:11:40.990506 containerd[1591]: time="2026-01-14T01:11:40.990457393Z" level=info msg="connecting to shim e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf" address="unix:///run/containerd/s/d839d08e18779e799dcf1924a386f88390981d39a36617449bd8575b29b0f8fb" protocol=ttrpc version=3 Jan 14 01:11:41.030106 systemd[1]: Started cri-containerd-e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf.scope - libcontainer container e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf. 
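The containerd entries above cover the full pull of quay.io/tigera/operator:v1.38.7: the "Pulled image" message reports the resolved image ID, repo tag, repo digest, reported size (25057686) and wall-clock pull time (2.493343878s), after which the container is created in the sandbox and started through a shim over a ttrpc unix socket. A minimal sketch for lifting the size and duration out of such a message (assuming Python 3; the quoting matches the escaped form shown in this log):

    import re

    # Pull the reported size and pull duration out of a containerd
    # "Pulled image ..." message in the escaped form shown above.
    SIZE = re.compile(r'size \\?"(\d+)\\?"')
    TIME = re.compile(r' in ([\d.]+)s')

    def pull_stats(line):
        size, dur = SIZE.search(line), TIME.search(line)
        if size and dur:
            return int(size.group(1)), float(dur.group(1))
        return None

    # For the record above: pull_stats(line) == (25057686, 2.493343878)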
Jan 14 01:11:41.051000 audit: BPF prog-id=144 op=LOAD Jan 14 01:11:41.052000 audit: BPF prog-id=145 op=LOAD Jan 14 01:11:41.052000 audit[3175]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.052000 audit: BPF prog-id=145 op=UNLOAD Jan 14 01:11:41.052000 audit[3175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.053000 audit: BPF prog-id=146 op=LOAD Jan 14 01:11:41.053000 audit[3175]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.053000 audit: BPF prog-id=147 op=LOAD Jan 14 01:11:41.053000 audit[3175]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.053000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:11:41.053000 audit[3175]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.053000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:11:41.053000 audit[3175]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.053000 audit: BPF prog-id=148 op=LOAD Jan 14 01:11:41.053000 audit[3175]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2985 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:41.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535663637323331326636313965643839316439383531653638636635 Jan 14 01:11:41.081945 containerd[1591]: time="2026-01-14T01:11:41.081893564Z" level=info msg="StartContainer for \"e5f672312f619ed891d9851e68cf541621ea2e37678311291cd74c97a92512bf\" returns successfully" Jan 14 01:11:41.649010 kubelet[2860]: I0114 01:11:41.648802 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z8t82" podStartSLOduration=4.64865401 podStartE2EDuration="4.64865401s" podCreationTimestamp="2026-01-14 01:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:38.64780063 +0000 UTC m=+6.396801675" watchObservedRunningTime="2026-01-14 01:11:41.64865401 +0000 UTC m=+9.397655043" Jan 14 01:11:41.650856 kubelet[2860]: I0114 01:11:41.650110 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-mslgb" podStartSLOduration=2.152993089 podStartE2EDuration="4.650089863s" podCreationTimestamp="2026-01-14 01:11:37 +0000 UTC" firstStartedPulling="2026-01-14 01:11:38.453063765 +0000 UTC m=+6.202064788" lastFinishedPulling="2026-01-14 01:11:40.950160523 +0000 UTC m=+8.699161562" observedRunningTime="2026-01-14 01:11:41.648064049 +0000 UTC m=+9.397065091" watchObservedRunningTime="2026-01-14 01:11:41.650089863 +0000 UTC m=+9.399090905" Jan 14 01:11:42.125241 kubelet[2860]: E0114 01:11:42.125189 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:42.638541 kubelet[2860]: E0114 01:11:42.638493 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:48.569823 sudo[1883]: pam_unix(sudo:session): session closed for user root Jan 14 01:11:48.568000 audit[1883]: USER_END pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:48.571222 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:11:48.571323 kernel: audit: type=1106 audit(1768353108.568:520): pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:48.568000 audit[1883]: CRED_DISP pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:48.587723 kernel: audit: type=1104 audit(1768353108.568:521): pid=1883 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:11:48.631552 sshd[1882]: Connection closed by 4.153.228.146 port 45472 Jan 14 01:11:48.634048 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:48.637000 audit[1878]: USER_END pid=1878 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:48.642948 systemd[1]: sshd@8-143.198.154.109:22-4.153.228.146:45472.service: Deactivated successfully. Jan 14 01:11:48.646793 kernel: audit: type=1106 audit(1768353108.637:522): pid=1878 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:48.647898 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:11:48.648541 systemd[1]: session-10.scope: Consumed 5.800s CPU time, 157.4M memory peak. Jan 14 01:11:48.653716 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:11:48.656423 systemd-logind[1569]: Removed session 10. Jan 14 01:11:48.637000 audit[1878]: CRED_DISP pid=1878 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:48.662833 kernel: audit: type=1104 audit(1768353108.637:523): pid=1878 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:11:48.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.154.109:22-4.153.228.146:45472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:48.672704 kernel: audit: type=1131 audit(1768353108.642:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-143.198.154.109:22-4.153.228.146:45472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:49.319000 audit[3256]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:49.326818 kernel: audit: type=1325 audit(1768353109.319:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:49.319000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff4de51040 a2=0 a3=7fff4de5102c items=0 ppid=2970 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:49.335604 kernel: audit: type=1300 audit(1768353109.319:525): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff4de51040 a2=0 a3=7fff4de5102c items=0 ppid=2970 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.335790 kernel: audit: type=1327 audit(1768353109.319:525): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:49.325000 audit[3256]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:49.339357 kernel: audit: type=1325 audit(1768353109.325:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:49.325000 audit[3256]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4de51040 a2=0 a3=0 items=0 ppid=2970 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.350764 kernel: audit: type=1300 audit(1768353109.325:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4de51040 a2=0 a3=0 items=0 ppid=2970 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:50.348000 audit[3258]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:50.348000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd75da500 a2=0 a3=7fffd75da4ec items=0 ppid=2970 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:50.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:50.355000 audit[3258]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:50.355000 
audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd75da500 a2=0 a3=0 items=0 ppid=2970 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:50.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:53.119000 audit[3260]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:53.119000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffeb0d1f260 a2=0 a3=7ffeb0d1f24c items=0 ppid=2970 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:53.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:53.124000 audit[3260]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:53.124000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb0d1f260 a2=0 a3=0 items=0 ppid=2970 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:53.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:54.147000 audit[3262]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:54.151394 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 01:11:54.151518 kernel: audit: type=1325 audit(1768353114.147:531): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:54.156783 kernel: audit: type=1300 audit(1768353114.147:531): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc99dd50c0 a2=0 a3=7ffc99dd50ac items=0 ppid=2970 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:54.147000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc99dd50c0 a2=0 a3=7ffc99dd50ac items=0 ppid=2970 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:54.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:54.166736 kernel: audit: type=1327 audit(1768353114.147:531): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:54.166000 audit[3262]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:54.166000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffc99dd50c0 a2=0 a3=0 items=0 ppid=2970 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:54.173791 kernel: audit: type=1325 audit(1768353114.166:532): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:54.173936 kernel: audit: type=1300 audit(1768353114.166:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc99dd50c0 a2=0 a3=0 items=0 ppid=2970 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:54.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:54.180413 kernel: audit: type=1327 audit(1768353114.166:532): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:55.722000 audit[3265]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:55.729756 kernel: audit: type=1325 audit(1768353115.722:533): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:55.722000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3b793090 a2=0 a3=7fff3b79307c items=0 ppid=2970 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:55.739819 kernel: audit: type=1300 audit(1768353115.722:533): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3b793090 a2=0 a3=7fff3b79307c items=0 ppid=2970 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:55.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:55.738000 audit[3265]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:55.746698 kernel: audit: type=1327 audit(1768353115.722:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:55.746994 kernel: audit: type=1325 audit(1768353115.738:534): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:55.738000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3b793090 a2=0 a3=0 items=0 ppid=2970 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:55.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:55.777298 systemd[1]: Created slice kubepods-besteffort-pode2ebe4bd_85cb_429f_9253_6799187d6d71.slice 
- libcontainer container kubepods-besteffort-pode2ebe4bd_85cb_429f_9253_6799187d6d71.slice. Jan 14 01:11:55.881693 kubelet[2860]: I0114 01:11:55.881113 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2ebe4bd-85cb-429f-9253-6799187d6d71-typha-certs\") pod \"calico-typha-566fc88784-v2ndt\" (UID: \"e2ebe4bd-85cb-429f-9253-6799187d6d71\") " pod="calico-system/calico-typha-566fc88784-v2ndt" Jan 14 01:11:55.881693 kubelet[2860]: I0114 01:11:55.881183 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2ebe4bd-85cb-429f-9253-6799187d6d71-tigera-ca-bundle\") pod \"calico-typha-566fc88784-v2ndt\" (UID: \"e2ebe4bd-85cb-429f-9253-6799187d6d71\") " pod="calico-system/calico-typha-566fc88784-v2ndt" Jan 14 01:11:55.881693 kubelet[2860]: I0114 01:11:55.881223 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4jz\" (UniqueName: \"kubernetes.io/projected/e2ebe4bd-85cb-429f-9253-6799187d6d71-kube-api-access-vx4jz\") pod \"calico-typha-566fc88784-v2ndt\" (UID: \"e2ebe4bd-85cb-429f-9253-6799187d6d71\") " pod="calico-system/calico-typha-566fc88784-v2ndt" Jan 14 01:11:55.943302 systemd[1]: Created slice kubepods-besteffort-pod1304d5aa_a8d9_4b64_8652_987d4289ac9a.slice - libcontainer container kubepods-besteffort-pod1304d5aa_a8d9_4b64_8652_987d4289ac9a.slice. Jan 14 01:11:55.982906 kubelet[2860]: I0114 01:11:55.982438 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-cni-bin-dir\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.983877 kubelet[2860]: I0114 01:11:55.983658 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1304d5aa-a8d9-4b64-8652-987d4289ac9a-node-certs\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.984453 kubelet[2860]: I0114 01:11:55.984342 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-flexvol-driver-host\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.984453 kubelet[2860]: I0114 01:11:55.984402 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqfg\" (UniqueName: \"kubernetes.io/projected/1304d5aa-a8d9-4b64-8652-987d4289ac9a-kube-api-access-pdqfg\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.984453 kubelet[2860]: I0114 01:11:55.984430 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-policysync\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.985995 kubelet[2860]: I0114 01:11:55.985805 2860 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-lib-modules\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.985995 kubelet[2860]: I0114 01:11:55.985841 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1304d5aa-a8d9-4b64-8652-987d4289ac9a-tigera-ca-bundle\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.986395 kubelet[2860]: I0114 01:11:55.986190 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-xtables-lock\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.986395 kubelet[2860]: I0114 01:11:55.986345 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-cni-net-dir\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.986672 kubelet[2860]: I0114 01:11:55.986582 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-var-run-calico\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.986672 kubelet[2860]: I0114 01:11:55.986628 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-cni-log-dir\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:55.986853 kubelet[2860]: I0114 01:11:55.986703 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1304d5aa-a8d9-4b64-8652-987d4289ac9a-var-lib-calico\") pod \"calico-node-79xjv\" (UID: \"1304d5aa-a8d9-4b64-8652-987d4289ac9a\") " pod="calico-system/calico-node-79xjv" Jan 14 01:11:56.092700 kubelet[2860]: E0114 01:11:56.092031 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:56.095999 containerd[1591]: time="2026-01-14T01:11:56.095941794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566fc88784-v2ndt,Uid:e2ebe4bd-85cb-429f-9253-6799187d6d71,Namespace:calico-system,Attempt:0,}" Jan 14 01:11:56.107165 kubelet[2860]: E0114 01:11:56.107089 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.107165 kubelet[2860]: W0114 01:11:56.107130 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Jan 14 01:11:56.107165 kubelet[2860]: E0114 01:11:56.107161 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.109775 kubelet[2860]: E0114 01:11:56.109678 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.109775 kubelet[2860]: W0114 01:11:56.109703 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.109775 kubelet[2860]: E0114 01:11:56.109726 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.110656 kubelet[2860]: E0114 01:11:56.110076 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.110656 kubelet[2860]: W0114 01:11:56.110088 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.110656 kubelet[2860]: E0114 01:11:56.110101 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.112536 kubelet[2860]: E0114 01:11:56.112041 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.112536 kubelet[2860]: W0114 01:11:56.112153 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.112536 kubelet[2860]: E0114 01:11:56.112177 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.114687 kubelet[2860]: E0114 01:11:56.114590 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.114687 kubelet[2860]: W0114 01:11:56.114617 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.114687 kubelet[2860]: E0114 01:11:56.114642 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.115167 kubelet[2860]: E0114 01:11:56.115141 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.117095 kubelet[2860]: W0114 01:11:56.115159 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.117214 kubelet[2860]: E0114 01:11:56.117106 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.117505 kubelet[2860]: E0114 01:11:56.117481 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.117505 kubelet[2860]: W0114 01:11:56.117500 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.117660 kubelet[2860]: E0114 01:11:56.117516 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.118063 kubelet[2860]: E0114 01:11:56.118044 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.118063 kubelet[2860]: W0114 01:11:56.118062 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.118194 kubelet[2860]: E0114 01:11:56.118077 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.118887 kubelet[2860]: E0114 01:11:56.118864 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.118887 kubelet[2860]: W0114 01:11:56.118882 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.119030 kubelet[2860]: E0114 01:11:56.118897 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.119481 kubelet[2860]: E0114 01:11:56.119444 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.119481 kubelet[2860]: W0114 01:11:56.119468 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.119481 kubelet[2860]: E0114 01:11:56.119486 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.121117 kubelet[2860]: E0114 01:11:56.120941 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.121117 kubelet[2860]: W0114 01:11:56.120962 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.121117 kubelet[2860]: E0114 01:11:56.120984 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.122265 kubelet[2860]: E0114 01:11:56.122211 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.122265 kubelet[2860]: W0114 01:11:56.122231 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.122265 kubelet[2860]: E0114 01:11:56.122248 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.122994 kubelet[2860]: E0114 01:11:56.122874 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.122994 kubelet[2860]: W0114 01:11:56.122888 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.122994 kubelet[2860]: E0114 01:11:56.122902 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.123228 kubelet[2860]: E0114 01:11:56.123196 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.123228 kubelet[2860]: W0114 01:11:56.123206 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.123228 kubelet[2860]: E0114 01:11:56.123217 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.123612 kubelet[2860]: E0114 01:11:56.123577 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.123612 kubelet[2860]: W0114 01:11:56.123587 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.123612 kubelet[2860]: E0114 01:11:56.123597 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.125860 kubelet[2860]: E0114 01:11:56.124119 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.125860 kubelet[2860]: W0114 01:11:56.125711 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.125860 kubelet[2860]: E0114 01:11:56.125745 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.126422 kubelet[2860]: E0114 01:11:56.126336 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.126422 kubelet[2860]: W0114 01:11:56.126355 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.126422 kubelet[2860]: E0114 01:11:56.126376 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.160976 containerd[1591]: time="2026-01-14T01:11:56.158585549Z" level=info msg="connecting to shim 67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f" address="unix:///run/containerd/s/9a4f7535b8482c66435cb025133bf1faa159fbda117f1c0f882daaf803dfe839" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:56.170924 kubelet[2860]: E0114 01:11:56.169657 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:11:56.189705 kubelet[2860]: E0114 01:11:56.188961 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.189937 kubelet[2860]: W0114 01:11:56.189912 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.191932 kubelet[2860]: E0114 01:11:56.190618 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.253586 kubelet[2860]: E0114 01:11:56.253450 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:56.256701 containerd[1591]: time="2026-01-14T01:11:56.256533202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79xjv,Uid:1304d5aa-a8d9-4b64-8652-987d4289ac9a,Namespace:calico-system,Attempt:0,}" Jan 14 01:11:56.258927 kubelet[2860]: E0114 01:11:56.258859 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.258927 kubelet[2860]: W0114 01:11:56.258890 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.259985 kubelet[2860]: E0114 01:11:56.259236 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.260822 systemd[1]: Started cri-containerd-67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f.scope - libcontainer container 67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f. Jan 14 01:11:56.265570 kubelet[2860]: E0114 01:11:56.265389 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.265570 kubelet[2860]: W0114 01:11:56.265427 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.265570 kubelet[2860]: E0114 01:11:56.265453 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.266995 kubelet[2860]: E0114 01:11:56.266918 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.267496 kubelet[2860]: W0114 01:11:56.267083 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.267496 kubelet[2860]: E0114 01:11:56.267120 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.269865 kubelet[2860]: E0114 01:11:56.269807 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.269865 kubelet[2860]: W0114 01:11:56.269835 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.271317 kubelet[2860]: E0114 01:11:56.270736 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.273021 kubelet[2860]: E0114 01:11:56.272996 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.273338 kubelet[2860]: W0114 01:11:56.273311 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.273806 kubelet[2860]: E0114 01:11:56.273760 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.275110 kubelet[2860]: E0114 01:11:56.275010 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.275110 kubelet[2860]: W0114 01:11:56.275030 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.275110 kubelet[2860]: E0114 01:11:56.275052 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.276313 kubelet[2860]: E0114 01:11:56.275985 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.276686 kubelet[2860]: W0114 01:11:56.276477 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.276686 kubelet[2860]: E0114 01:11:56.276517 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.277706 kubelet[2860]: E0114 01:11:56.277175 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.277706 kubelet[2860]: W0114 01:11:56.277193 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.277706 kubelet[2860]: E0114 01:11:56.277212 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.278282 kubelet[2860]: E0114 01:11:56.278170 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.278282 kubelet[2860]: W0114 01:11:56.278193 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.278282 kubelet[2860]: E0114 01:11:56.278214 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.280387 kubelet[2860]: E0114 01:11:56.280367 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.280735 kubelet[2860]: W0114 01:11:56.280429 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.280735 kubelet[2860]: E0114 01:11:56.280450 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.281594 kubelet[2860]: E0114 01:11:56.281268 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.281594 kubelet[2860]: W0114 01:11:56.281320 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.281594 kubelet[2860]: E0114 01:11:56.281340 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.282718 kubelet[2860]: E0114 01:11:56.282601 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.282718 kubelet[2860]: W0114 01:11:56.282618 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.282718 kubelet[2860]: E0114 01:11:56.282635 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.284085 kubelet[2860]: E0114 01:11:56.283971 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.284085 kubelet[2860]: W0114 01:11:56.283992 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.284085 kubelet[2860]: E0114 01:11:56.284019 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.284576 kubelet[2860]: E0114 01:11:56.284493 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.284576 kubelet[2860]: W0114 01:11:56.284511 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.284576 kubelet[2860]: E0114 01:11:56.284526 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.285960 kubelet[2860]: E0114 01:11:56.285795 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.285960 kubelet[2860]: W0114 01:11:56.285817 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.285960 kubelet[2860]: E0114 01:11:56.285836 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.286444 kubelet[2860]: E0114 01:11:56.286318 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.286444 kubelet[2860]: W0114 01:11:56.286337 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.286444 kubelet[2860]: E0114 01:11:56.286354 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.287569 kubelet[2860]: E0114 01:11:56.286901 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.287569 kubelet[2860]: W0114 01:11:56.286919 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.287569 kubelet[2860]: E0114 01:11:56.286934 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.288077 kubelet[2860]: E0114 01:11:56.287983 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.288077 kubelet[2860]: W0114 01:11:56.288005 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.288077 kubelet[2860]: E0114 01:11:56.288021 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.288540 kubelet[2860]: E0114 01:11:56.288463 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.288540 kubelet[2860]: W0114 01:11:56.288476 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.288540 kubelet[2860]: E0114 01:11:56.288488 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.290024 kubelet[2860]: E0114 01:11:56.290009 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.290229 kubelet[2860]: W0114 01:11:56.290119 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.290229 kubelet[2860]: E0114 01:11:56.290140 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.291446 kubelet[2860]: E0114 01:11:56.291425 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.291682 kubelet[2860]: W0114 01:11:56.291543 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.291682 kubelet[2860]: E0114 01:11:56.291563 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.291682 kubelet[2860]: I0114 01:11:56.291602 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c51b2de-d8d9-4e42-af77-3bf2696395e2-varrun\") pod \"csi-node-driver-598dd\" (UID: \"9c51b2de-d8d9-4e42-af77-3bf2696395e2\") " pod="calico-system/csi-node-driver-598dd" Jan 14 01:11:56.293101 kubelet[2860]: E0114 01:11:56.292956 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.293101 kubelet[2860]: W0114 01:11:56.292975 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.293101 kubelet[2860]: E0114 01:11:56.292994 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.293101 kubelet[2860]: I0114 01:11:56.293032 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c51b2de-d8d9-4e42-af77-3bf2696395e2-registration-dir\") pod \"csi-node-driver-598dd\" (UID: \"9c51b2de-d8d9-4e42-af77-3bf2696395e2\") " pod="calico-system/csi-node-driver-598dd" Jan 14 01:11:56.293590 kubelet[2860]: E0114 01:11:56.293574 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.293801 kubelet[2860]: W0114 01:11:56.293649 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.293801 kubelet[2860]: E0114 01:11:56.293690 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.293801 kubelet[2860]: I0114 01:11:56.293716 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c51b2de-d8d9-4e42-af77-3bf2696395e2-socket-dir\") pod \"csi-node-driver-598dd\" (UID: \"9c51b2de-d8d9-4e42-af77-3bf2696395e2\") " pod="calico-system/csi-node-driver-598dd" Jan 14 01:11:56.294433 kubelet[2860]: E0114 01:11:56.294008 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.294632 kubelet[2860]: W0114 01:11:56.294500 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.294632 kubelet[2860]: E0114 01:11:56.294521 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.294632 kubelet[2860]: I0114 01:11:56.294542 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c51b2de-d8d9-4e42-af77-3bf2696395e2-kubelet-dir\") pod \"csi-node-driver-598dd\" (UID: \"9c51b2de-d8d9-4e42-af77-3bf2696395e2\") " pod="calico-system/csi-node-driver-598dd" Jan 14 01:11:56.295003 kubelet[2860]: E0114 01:11:56.294901 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.295003 kubelet[2860]: W0114 01:11:56.294915 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.295003 kubelet[2860]: E0114 01:11:56.294927 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.295003 kubelet[2860]: I0114 01:11:56.294948 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbns\" (UniqueName: \"kubernetes.io/projected/9c51b2de-d8d9-4e42-af77-3bf2696395e2-kube-api-access-lqbns\") pod \"csi-node-driver-598dd\" (UID: \"9c51b2de-d8d9-4e42-af77-3bf2696395e2\") " pod="calico-system/csi-node-driver-598dd" Jan 14 01:11:56.295262 kubelet[2860]: E0114 01:11:56.295240 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.295343 kubelet[2860]: W0114 01:11:56.295261 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.295343 kubelet[2860]: E0114 01:11:56.295278 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.297966 kubelet[2860]: E0114 01:11:56.297933 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.297966 kubelet[2860]: W0114 01:11:56.297957 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.297966 kubelet[2860]: E0114 01:11:56.297981 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.298651 kubelet[2860]: E0114 01:11:56.298289 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.298651 kubelet[2860]: W0114 01:11:56.298301 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.298651 kubelet[2860]: E0114 01:11:56.298316 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.302630 kubelet[2860]: E0114 01:11:56.302587 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.302630 kubelet[2860]: W0114 01:11:56.302621 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.303167 kubelet[2860]: E0114 01:11:56.302649 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.303814 kubelet[2860]: E0114 01:11:56.303778 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.303814 kubelet[2860]: W0114 01:11:56.303806 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.304193 kubelet[2860]: E0114 01:11:56.303831 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.305843 kubelet[2860]: E0114 01:11:56.305802 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.305843 kubelet[2860]: W0114 01:11:56.305836 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.306011 kubelet[2860]: E0114 01:11:56.305864 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.308104 kubelet[2860]: E0114 01:11:56.307946 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.308104 kubelet[2860]: W0114 01:11:56.307982 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.308104 kubelet[2860]: E0114 01:11:56.308021 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.308833 kubelet[2860]: E0114 01:11:56.308688 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.308833 kubelet[2860]: W0114 01:11:56.308710 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.308833 kubelet[2860]: E0114 01:11:56.308729 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.309303 kubelet[2860]: E0114 01:11:56.309266 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.309303 kubelet[2860]: W0114 01:11:56.309280 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.309674 kubelet[2860]: E0114 01:11:56.309516 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.309982 kubelet[2860]: E0114 01:11:56.309951 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.310087 kubelet[2860]: W0114 01:11:56.310031 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.310087 kubelet[2860]: E0114 01:11:56.310065 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.345392 containerd[1591]: time="2026-01-14T01:11:56.344840978Z" level=info msg="connecting to shim 75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77" address="unix:///run/containerd/s/3f4299f8f0dd96965f86d7a735c7d98cb6de67e690939b2f189fca42cf92ac48" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:56.397070 kubelet[2860]: E0114 01:11:56.396634 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.397070 kubelet[2860]: W0114 01:11:56.396856 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.397070 kubelet[2860]: E0114 01:11:56.396890 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.398592 kubelet[2860]: E0114 01:11:56.398213 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.398592 kubelet[2860]: W0114 01:11:56.398236 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.398592 kubelet[2860]: E0114 01:11:56.398275 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.398592 kubelet[2860]: E0114 01:11:56.398575 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.398592 kubelet[2860]: W0114 01:11:56.398589 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.398592 kubelet[2860]: E0114 01:11:56.398606 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.399450 kubelet[2860]: E0114 01:11:56.399004 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.399450 kubelet[2860]: W0114 01:11:56.399019 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.399450 kubelet[2860]: E0114 01:11:56.399033 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.399494 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.400344 kubelet[2860]: W0114 01:11:56.399508 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.399523 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.399930 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.400344 kubelet[2860]: W0114 01:11:56.399943 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.399957 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.400219 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.400344 kubelet[2860]: W0114 01:11:56.400230 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.400344 kubelet[2860]: E0114 01:11:56.400256 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.400612 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.404896 kubelet[2860]: W0114 01:11:56.400624 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.400637 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.400925 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.404896 kubelet[2860]: W0114 01:11:56.400939 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.400953 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.401253 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.404896 kubelet[2860]: W0114 01:11:56.401265 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.401276 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.404896 kubelet[2860]: E0114 01:11:56.401585 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.405818 kubelet[2860]: W0114 01:11:56.401596 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.401608 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.401874 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.405818 kubelet[2860]: W0114 01:11:56.401885 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.401897 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.402109 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.405818 kubelet[2860]: W0114 01:11:56.402117 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.402127 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.405818 kubelet[2860]: E0114 01:11:56.402317 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.405818 kubelet[2860]: W0114 01:11:56.402327 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.402337 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403141 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.406723 kubelet[2860]: W0114 01:11:56.403154 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403167 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403360 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.406723 kubelet[2860]: W0114 01:11:56.403368 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403377 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403570 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.406723 kubelet[2860]: W0114 01:11:56.403584 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.406723 kubelet[2860]: E0114 01:11:56.403599 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.403949 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.407328 kubelet[2860]: W0114 01:11:56.403961 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.403977 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.404193 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.407328 kubelet[2860]: W0114 01:11:56.404202 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.404212 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.404342 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.407328 kubelet[2860]: W0114 01:11:56.404350 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.404356 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.407328 kubelet[2860]: E0114 01:11:56.404508 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.408158 kubelet[2860]: W0114 01:11:56.404515 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.408158 kubelet[2860]: E0114 01:11:56.404522 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.410245 kubelet[2860]: E0114 01:11:56.409873 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.410245 kubelet[2860]: W0114 01:11:56.409906 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.410245 kubelet[2860]: E0114 01:11:56.409935 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.410464 kubelet[2860]: E0114 01:11:56.410289 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.410464 kubelet[2860]: W0114 01:11:56.410304 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.410464 kubelet[2860]: E0114 01:11:56.410317 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.413201 kubelet[2860]: E0114 01:11:56.413149 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.413201 kubelet[2860]: W0114 01:11:56.413182 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.413201 kubelet[2860]: E0114 01:11:56.413208 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:56.417604 kubelet[2860]: E0114 01:11:56.417525 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.417604 kubelet[2860]: W0114 01:11:56.417561 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.417604 kubelet[2860]: E0114 01:11:56.417592 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.441856 kubelet[2860]: E0114 01:11:56.441802 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:56.441856 kubelet[2860]: W0114 01:11:56.441838 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:56.441856 kubelet[2860]: E0114 01:11:56.441866 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:56.465052 systemd[1]: Started cri-containerd-75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77.scope - libcontainer container 75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77. Jan 14 01:11:56.471000 audit: BPF prog-id=149 op=LOAD Jan 14 01:11:56.474000 audit: BPF prog-id=150 op=LOAD Jan 14 01:11:56.474000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.474000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:11:56.474000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.476000 audit: BPF prog-id=151 op=LOAD Jan 14 01:11:56.476000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.476000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.476000 audit: BPF prog-id=152 op=LOAD Jan 14 01:11:56.476000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.476000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:11:56.476000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.476000 audit: BPF prog-id=151 op=UNLOAD Jan 14 01:11:56.476000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.476000 audit: BPF prog-id=153 op=LOAD Jan 14 01:11:56.476000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=3295 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637323338623130646236303937373032316433323566313734363764 Jan 14 01:11:56.598000 audit: BPF prog-id=154 op=LOAD Jan 14 01:11:56.601000 audit: BPF prog-id=155 op=LOAD Jan 14 01:11:56.601000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.601000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=156 op=LOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=157 op=LOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.603000 audit: BPF prog-id=158 op=LOAD Jan 14 01:11:56.603000 audit[3388]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3376 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373638626138633634613732633733623832373566656135623133 Jan 14 01:11:56.706546 containerd[1591]: time="2026-01-14T01:11:56.706480910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566fc88784-v2ndt,Uid:e2ebe4bd-85cb-429f-9253-6799187d6d71,Namespace:calico-system,Attempt:0,} returns sandbox id \"67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f\"" Jan 14 01:11:56.716156 kubelet[2860]: E0114 01:11:56.716111 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:56.717269 containerd[1591]: time="2026-01-14T01:11:56.717197179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79xjv,Uid:1304d5aa-a8d9-4b64-8652-987d4289ac9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\"" Jan 14 01:11:56.720310 containerd[1591]: time="2026-01-14T01:11:56.720038016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:11:56.720511 kubelet[2860]: E0114 01:11:56.720408 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:56.810000 audit[3455]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:56.810000 audit[3455]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8f652650 a2=0 a3=7fff8f65263c items=0 ppid=2970 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:56.816000 audit[3455]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:56.816000 audit[3455]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8f652650 a2=0 a3=0 items=0 ppid=2970 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:56.816000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:57.518605 kubelet[2860]: E0114 01:11:57.518131 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:11:58.237797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591795926.mount: Deactivated successfully. Jan 14 01:11:59.069936 containerd[1591]: time="2026-01-14T01:11:59.069730615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:11:59.090382 containerd[1591]: time="2026-01-14T01:11:59.089750875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.369493427s" Jan 14 01:11:59.090382 containerd[1591]: time="2026-01-14T01:11:59.089800682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:11:59.093309 containerd[1591]: time="2026-01-14T01:11:59.093220304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:11:59.094816 containerd[1591]: time="2026-01-14T01:11:59.093433907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:59.094816 containerd[1591]: time="2026-01-14T01:11:59.094090229Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:59.094816 containerd[1591]: time="2026-01-14T01:11:59.094552454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:59.123460 containerd[1591]: time="2026-01-14T01:11:59.123399746Z" level=info msg="CreateContainer within sandbox \"67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:11:59.166964 containerd[1591]: time="2026-01-14T01:11:59.166910188Z" level=info msg="Container 6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:59.185263 containerd[1591]: time="2026-01-14T01:11:59.185176405Z" level=info msg="CreateContainer within sandbox \"67238b10db60977021d325f17467d49841074f526d2df2f1e9685f9fed61256f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c\"" Jan 14 01:11:59.186762 containerd[1591]: time="2026-01-14T01:11:59.186418812Z" level=info msg="StartContainer for \"6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c\"" Jan 14 01:11:59.188531 containerd[1591]: time="2026-01-14T01:11:59.188446215Z" level=info msg="connecting to shim 
6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c" address="unix:///run/containerd/s/9a4f7535b8482c66435cb025133bf1faa159fbda117f1c0f882daaf803dfe839" protocol=ttrpc version=3 Jan 14 01:11:59.227121 systemd[1]: Started cri-containerd-6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c.scope - libcontainer container 6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c. Jan 14 01:11:59.248000 audit: BPF prog-id=159 op=LOAD Jan 14 01:11:59.251694 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 14 01:11:59.251817 kernel: audit: type=1334 audit(1768353119.248:553): prog-id=159 op=LOAD Jan 14 01:11:59.254000 audit: BPF prog-id=160 op=LOAD Jan 14 01:11:59.256222 kernel: audit: type=1334 audit(1768353119.254:554): prog-id=160 op=LOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.268660 kernel: audit: type=1300 audit(1768353119.254:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.268895 kernel: audit: type=1327 audit(1768353119.254:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:11:59.274031 kernel: audit: type=1334 audit(1768353119.254:555): prog-id=160 op=UNLOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.278212 kernel: audit: type=1300 audit(1768353119.254:555): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.285209 kernel: audit: type=1327 audit(1768353119.254:555): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=161 op=LOAD Jan 14 01:11:59.291985 kernel: audit: type=1334 audit(1768353119.254:556): prog-id=161 op=LOAD Jan 14 01:11:59.292089 kernel: audit: type=1300 audit(1768353119.254:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.305716 kernel: audit: type=1327 audit(1768353119.254:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=162 op=LOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.254000 audit: BPF prog-id=163 op=LOAD Jan 14 01:11:59.254000 audit[3466]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3295 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636393666643561646439616231313065626134386461306566656230 Jan 14 01:11:59.334226 containerd[1591]: time="2026-01-14T01:11:59.332520011Z" level=info msg="StartContainer for \"6696fd5add9ab110eba48da0efeb0190c65d8d785afd96bf8ac4ca9c7d894c5c\" returns successfully" Jan 14 01:11:59.519732 kubelet[2860]: E0114 01:11:59.519021 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:11:59.724748 kubelet[2860]: E0114 01:11:59.722967 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:11:59.747181 kubelet[2860]: I0114 01:11:59.745085 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-566fc88784-v2ndt" podStartSLOduration=2.373002575 podStartE2EDuration="4.745066849s" podCreationTimestamp="2026-01-14 01:11:55 +0000 UTC" firstStartedPulling="2026-01-14 01:11:56.719426846 +0000 UTC m=+24.468427866" lastFinishedPulling="2026-01-14 01:11:59.091491117 +0000 UTC m=+26.840492140" observedRunningTime="2026-01-14 01:11:59.744864961 +0000 UTC m=+27.493866003" watchObservedRunningTime="2026-01-14 01:11:59.745066849 +0000 UTC m=+27.494067912" Jan 14 01:11:59.820692 kubelet[2860]: E0114 01:11:59.820615 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.820692 kubelet[2860]: W0114 01:11:59.820641 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.821136 kubelet[2860]: E0114 01:11:59.820973 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.821603 kubelet[2860]: E0114 01:11:59.821456 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.821603 kubelet[2860]: W0114 01:11:59.821476 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.821603 kubelet[2860]: E0114 01:11:59.821491 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.822158 kubelet[2860]: E0114 01:11:59.822040 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.822158 kubelet[2860]: W0114 01:11:59.822074 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.822158 kubelet[2860]: E0114 01:11:59.822089 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.822627 kubelet[2860]: E0114 01:11:59.822614 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.822834 kubelet[2860]: W0114 01:11:59.822685 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.822834 kubelet[2860]: E0114 01:11:59.822699 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.822974 kubelet[2860]: E0114 01:11:59.822964 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.823050 kubelet[2860]: W0114 01:11:59.823037 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.823178 kubelet[2860]: E0114 01:11:59.823091 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.823419 kubelet[2860]: E0114 01:11:59.823406 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.823498 kubelet[2860]: W0114 01:11:59.823488 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.823556 kubelet[2860]: E0114 01:11:59.823547 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.823885 kubelet[2860]: E0114 01:11:59.823874 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.823980 kubelet[2860]: W0114 01:11:59.823969 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.824117 kubelet[2860]: E0114 01:11:59.824040 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.824297 kubelet[2860]: E0114 01:11:59.824286 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.824473 kubelet[2860]: W0114 01:11:59.824320 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.824473 kubelet[2860]: E0114 01:11:59.824332 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.824623 kubelet[2860]: E0114 01:11:59.824584 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.824623 kubelet[2860]: W0114 01:11:59.824594 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.824623 kubelet[2860]: E0114 01:11:59.824603 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.824983 kubelet[2860]: E0114 01:11:59.824944 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.824983 kubelet[2860]: W0114 01:11:59.824955 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.824983 kubelet[2860]: E0114 01:11:59.824967 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.825350 kubelet[2860]: E0114 01:11:59.825311 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.825525 kubelet[2860]: W0114 01:11:59.825457 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.825525 kubelet[2860]: E0114 01:11:59.825481 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.826030 kubelet[2860]: E0114 01:11:59.825909 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.826030 kubelet[2860]: W0114 01:11:59.825921 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.826030 kubelet[2860]: E0114 01:11:59.825939 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.826215 kubelet[2860]: E0114 01:11:59.826205 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.826285 kubelet[2860]: W0114 01:11:59.826275 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.826439 kubelet[2860]: E0114 01:11:59.826331 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.826543 kubelet[2860]: E0114 01:11:59.826534 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.826601 kubelet[2860]: W0114 01:11:59.826592 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.826689 kubelet[2860]: E0114 01:11:59.826659 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.826954 kubelet[2860]: E0114 01:11:59.826909 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.826954 kubelet[2860]: W0114 01:11:59.826919 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.826954 kubelet[2860]: E0114 01:11:59.826928 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.833451 kubelet[2860]: E0114 01:11:59.833416 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.833451 kubelet[2860]: W0114 01:11:59.833441 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.833451 kubelet[2860]: E0114 01:11:59.833464 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.834049 kubelet[2860]: E0114 01:11:59.833735 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.834049 kubelet[2860]: W0114 01:11:59.833746 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.834049 kubelet[2860]: E0114 01:11:59.833759 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.834446 kubelet[2860]: E0114 01:11:59.834324 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.834446 kubelet[2860]: W0114 01:11:59.834344 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.834446 kubelet[2860]: E0114 01:11:59.834361 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.834982 kubelet[2860]: E0114 01:11:59.834912 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.834982 kubelet[2860]: W0114 01:11:59.834932 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.834982 kubelet[2860]: E0114 01:11:59.834950 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.835348 kubelet[2860]: E0114 01:11:59.835328 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.835398 kubelet[2860]: W0114 01:11:59.835350 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.835398 kubelet[2860]: E0114 01:11:59.835368 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.835701 kubelet[2860]: E0114 01:11:59.835685 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.835764 kubelet[2860]: W0114 01:11:59.835702 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.835764 kubelet[2860]: E0114 01:11:59.835717 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.836005 kubelet[2860]: E0114 01:11:59.835988 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.836105 kubelet[2860]: W0114 01:11:59.836006 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.836105 kubelet[2860]: E0114 01:11:59.836023 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.836295 kubelet[2860]: E0114 01:11:59.836276 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.836295 kubelet[2860]: W0114 01:11:59.836292 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.836463 kubelet[2860]: E0114 01:11:59.836306 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.836463 kubelet[2860]: E0114 01:11:59.836578 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.836463 kubelet[2860]: W0114 01:11:59.836592 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.836463 kubelet[2860]: E0114 01:11:59.836609 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.837719 kubelet[2860]: E0114 01:11:59.837050 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.837719 kubelet[2860]: W0114 01:11:59.837062 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.837719 kubelet[2860]: E0114 01:11:59.837073 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.838210 kubelet[2860]: E0114 01:11:59.838135 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.838210 kubelet[2860]: W0114 01:11:59.838149 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.838210 kubelet[2860]: E0114 01:11:59.838161 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.839829 kubelet[2860]: E0114 01:11:59.839738 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.839829 kubelet[2860]: W0114 01:11:59.839756 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.839829 kubelet[2860]: E0114 01:11:59.839770 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.840369 kubelet[2860]: E0114 01:11:59.840269 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.840369 kubelet[2860]: W0114 01:11:59.840283 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.840369 kubelet[2860]: E0114 01:11:59.840294 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.840865 kubelet[2860]: E0114 01:11:59.840826 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.840941 kubelet[2860]: W0114 01:11:59.840928 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.841047 kubelet[2860]: E0114 01:11:59.840992 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.841493 kubelet[2860]: E0114 01:11:59.841450 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.841493 kubelet[2860]: W0114 01:11:59.841464 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.841493 kubelet[2860]: E0114 01:11:59.841477 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.842657 kubelet[2860]: E0114 01:11:59.842602 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.842657 kubelet[2860]: W0114 01:11:59.842624 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.842657 kubelet[2860]: E0114 01:11:59.842639 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:11:59.843717 kubelet[2860]: E0114 01:11:59.843585 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.843717 kubelet[2860]: W0114 01:11:59.843599 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.843717 kubelet[2860]: E0114 01:11:59.843610 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:11:59.843866 kubelet[2860]: E0114 01:11:59.843825 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:11:59.843866 kubelet[2860]: W0114 01:11:59.843833 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:11:59.843866 kubelet[2860]: E0114 01:11:59.843841 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.445770 containerd[1591]: time="2026-01-14T01:12:00.445431089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:00.448454 containerd[1591]: time="2026-01-14T01:12:00.448374967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:00.450025 containerd[1591]: time="2026-01-14T01:12:00.449926701Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:00.454860 containerd[1591]: time="2026-01-14T01:12:00.454759000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:00.461695 containerd[1591]: time="2026-01-14T01:12:00.461519979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.368244089s" Jan 14 01:12:00.461695 containerd[1591]: time="2026-01-14T01:12:00.461598827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:12:00.470850 containerd[1591]: time="2026-01-14T01:12:00.470782388Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:12:00.512654 containerd[1591]: time="2026-01-14T01:12:00.511943674Z" level=info msg="Container c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50: CDI 
devices from CRI Config.CDIDevices: []" Jan 14 01:12:00.530252 containerd[1591]: time="2026-01-14T01:12:00.530132988Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50\"" Jan 14 01:12:00.531845 containerd[1591]: time="2026-01-14T01:12:00.531570816Z" level=info msg="StartContainer for \"c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50\"" Jan 14 01:12:00.534881 containerd[1591]: time="2026-01-14T01:12:00.534825180Z" level=info msg="connecting to shim c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50" address="unix:///run/containerd/s/3f4299f8f0dd96965f86d7a735c7d98cb6de67e690939b2f189fca42cf92ac48" protocol=ttrpc version=3 Jan 14 01:12:00.575110 systemd[1]: Started cri-containerd-c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50.scope - libcontainer container c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50. Jan 14 01:12:00.733048 kubelet[2860]: E0114 01:12:00.732280 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:00.734695 kubelet[2860]: E0114 01:12:00.733921 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.734695 kubelet[2860]: W0114 01:12:00.734021 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.734695 kubelet[2860]: E0114 01:12:00.734079 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.735677 kubelet[2860]: E0114 01:12:00.735582 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.735677 kubelet[2860]: W0114 01:12:00.735615 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.735677 kubelet[2860]: E0114 01:12:00.735642 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.738131 kubelet[2860]: E0114 01:12:00.738092 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.738587 kubelet[2860]: W0114 01:12:00.738115 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.738587 kubelet[2860]: E0114 01:12:00.738321 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.740311 kubelet[2860]: E0114 01:12:00.740279 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.740676 kubelet[2860]: W0114 01:12:00.740495 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.740676 kubelet[2860]: E0114 01:12:00.740525 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.741175 kubelet[2860]: E0114 01:12:00.741133 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.741175 kubelet[2860]: W0114 01:12:00.741148 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.741359 kubelet[2860]: E0114 01:12:00.741161 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.741770 kubelet[2860]: E0114 01:12:00.741658 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.741770 kubelet[2860]: W0114 01:12:00.741713 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.741770 kubelet[2860]: E0114 01:12:00.741730 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.744133 kubelet[2860]: E0114 01:12:00.743977 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.744133 kubelet[2860]: W0114 01:12:00.743997 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.744133 kubelet[2860]: E0114 01:12:00.744013 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.745843 kubelet[2860]: E0114 01:12:00.745795 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.745843 kubelet[2860]: W0114 01:12:00.745815 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.746312 kubelet[2860]: E0114 01:12:00.746064 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.746643 kubelet[2860]: E0114 01:12:00.746623 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.746848 kubelet[2860]: W0114 01:12:00.746760 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.746848 kubelet[2860]: E0114 01:12:00.746792 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.747393 kubelet[2860]: E0114 01:12:00.747258 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.747393 kubelet[2860]: W0114 01:12:00.747274 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.747393 kubelet[2860]: E0114 01:12:00.747287 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.747853 kubelet[2860]: E0114 01:12:00.747676 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.747853 kubelet[2860]: W0114 01:12:00.747699 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.747853 kubelet[2860]: E0114 01:12:00.747711 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.748161 kubelet[2860]: E0114 01:12:00.748148 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.748275 kubelet[2860]: W0114 01:12:00.748235 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.748275 kubelet[2860]: E0114 01:12:00.748251 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.749217 kubelet[2860]: E0114 01:12:00.749091 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.749217 kubelet[2860]: W0114 01:12:00.749107 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.749217 kubelet[2860]: E0114 01:12:00.749120 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.749792 kubelet[2860]: E0114 01:12:00.749776 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.749874 kubelet[2860]: W0114 01:12:00.749864 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.749928 kubelet[2860]: E0114 01:12:00.749919 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.750696 kubelet[2860]: E0114 01:12:00.750582 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.750696 kubelet[2860]: W0114 01:12:00.750598 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.750696 kubelet[2860]: E0114 01:12:00.750610 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.752129 kubelet[2860]: E0114 01:12:00.752113 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.752289 kubelet[2860]: W0114 01:12:00.752201 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.752289 kubelet[2860]: E0114 01:12:00.752216 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.752578 kubelet[2860]: E0114 01:12:00.752566 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.752738 kubelet[2860]: W0114 01:12:00.752636 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.752738 kubelet[2860]: E0114 01:12:00.752649 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.753919 kubelet[2860]: E0114 01:12:00.753878 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.753919 kubelet[2860]: W0114 01:12:00.753904 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.754068 kubelet[2860]: E0114 01:12:00.753926 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.754274 kubelet[2860]: E0114 01:12:00.754251 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.754274 kubelet[2860]: W0114 01:12:00.754275 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.754746 kubelet[2860]: E0114 01:12:00.754293 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.754746 kubelet[2860]: E0114 01:12:00.754551 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.754746 kubelet[2860]: W0114 01:12:00.754565 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.754746 kubelet[2860]: E0114 01:12:00.754581 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.755377 kubelet[2860]: E0114 01:12:00.755355 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.755377 kubelet[2860]: W0114 01:12:00.755378 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.755696 kubelet[2860]: E0114 01:12:00.755396 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.755745 kubelet[2860]: E0114 01:12:00.755716 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.755775 kubelet[2860]: W0114 01:12:00.755750 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.755802 kubelet[2860]: E0114 01:12:00.755767 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.756891 kubelet[2860]: E0114 01:12:00.756867 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.756891 kubelet[2860]: W0114 01:12:00.756888 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.757585 kubelet[2860]: E0114 01:12:00.756907 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.757585 kubelet[2860]: E0114 01:12:00.757187 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.757585 kubelet[2860]: W0114 01:12:00.757201 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.757585 kubelet[2860]: E0114 01:12:00.757218 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.758209 kubelet[2860]: E0114 01:12:00.757785 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.758209 kubelet[2860]: W0114 01:12:00.757800 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.758209 kubelet[2860]: E0114 01:12:00.757818 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.758458 kubelet[2860]: E0114 01:12:00.758424 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.759742 kubelet[2860]: W0114 01:12:00.759718 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.759921 kubelet[2860]: E0114 01:12:00.759837 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.760228 kubelet[2860]: E0114 01:12:00.760154 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.760228 kubelet[2860]: W0114 01:12:00.760167 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.760228 kubelet[2860]: E0114 01:12:00.760179 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.760595 kubelet[2860]: E0114 01:12:00.760524 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.760595 kubelet[2860]: W0114 01:12:00.760536 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.760595 kubelet[2860]: E0114 01:12:00.760550 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.760935 kubelet[2860]: E0114 01:12:00.760922 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.761104 kubelet[2860]: W0114 01:12:00.760990 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.761104 kubelet[2860]: E0114 01:12:00.761006 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.761406 kubelet[2860]: E0114 01:12:00.761389 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.761497 kubelet[2860]: W0114 01:12:00.761485 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.761553 kubelet[2860]: E0114 01:12:00.761543 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.762058 kubelet[2860]: E0114 01:12:00.762034 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.762125 kubelet[2860]: W0114 01:12:00.762058 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.762125 kubelet[2860]: E0114 01:12:00.762081 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.762880 kubelet[2860]: E0114 01:12:00.762813 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.762880 kubelet[2860]: W0114 01:12:00.762830 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.762880 kubelet[2860]: E0114 01:12:00.762844 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:00.763882 kubelet[2860]: E0114 01:12:00.763857 2860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:00.763882 kubelet[2860]: W0114 01:12:00.763881 2860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:00.764044 kubelet[2860]: E0114 01:12:00.763909 2860 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:00.851000 audit[3597]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:00.851000 audit[3597]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb9f1c9a0 a2=0 a3=7fffb9f1c98c items=0 ppid=2970 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:00.856000 audit[3597]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:00.856000 audit[3597]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb9f1c9a0 a2=0 a3=7fffb9f1c98c items=0 ppid=2970 pid=3597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:00.863000 audit: BPF prog-id=164 op=LOAD Jan 14 01:12:00.863000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3376 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332393263356531386164653831633364623739323930396631666466 Jan 14 01:12:00.863000 audit: BPF prog-id=165 op=LOAD Jan 14 01:12:00.863000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3376 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332393263356531386164653831633364623739323930396631666466 Jan 14 01:12:00.864000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:12:00.864000 audit[3540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332393263356531386164653831633364623739323930396631666466 Jan 14 01:12:00.864000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:12:00.864000 audit[3540]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332393263356531386164653831633364623739323930396631666466 Jan 14 01:12:00.864000 audit: BPF prog-id=166 op=LOAD Jan 14 01:12:00.864000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3376 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:00.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332393263356531386164653831633364623739323930396631666466 Jan 14 01:12:00.898906 containerd[1591]: time="2026-01-14T01:12:00.898659012Z" level=info msg="StartContainer for \"c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50\" returns successfully" Jan 14 01:12:00.922346 systemd[1]: cri-containerd-c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50.scope: Deactivated successfully. Jan 14 01:12:00.925000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:12:00.957964 containerd[1591]: time="2026-01-14T01:12:00.957896725Z" level=info msg="received container exit event container_id:\"c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50\" id:\"c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50\" pid:3554 exited_at:{seconds:1768353120 nanos:929073832}" Jan 14 01:12:01.002544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c292c5e18ade81c3db792909f1fdf030279801308232744987666a8284967b50-rootfs.mount: Deactivated successfully. 
Jan 14 01:12:01.518840 kubelet[2860]: E0114 01:12:01.518753 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:01.740568 kubelet[2860]: E0114 01:12:01.740525 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:01.741543 kubelet[2860]: E0114 01:12:01.741187 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:01.744245 containerd[1591]: time="2026-01-14T01:12:01.744188494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:12:03.519176 kubelet[2860]: E0114 01:12:03.518982 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:05.083657 containerd[1591]: time="2026-01-14T01:12:05.083582599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:05.086514 containerd[1591]: time="2026-01-14T01:12:05.086116630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:12:05.088687 containerd[1591]: time="2026-01-14T01:12:05.088600490Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:05.093998 containerd[1591]: time="2026-01-14T01:12:05.093916087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:05.127410 containerd[1591]: time="2026-01-14T01:12:05.102219076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.357943889s" Jan 14 01:12:05.127410 containerd[1591]: time="2026-01-14T01:12:05.127382825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:12:05.136354 containerd[1591]: time="2026-01-14T01:12:05.136293122Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:12:05.150948 containerd[1591]: time="2026-01-14T01:12:05.150882382Z" level=info msg="Container 81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:05.167460 containerd[1591]: 
time="2026-01-14T01:12:05.167389370Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8\"" Jan 14 01:12:05.169866 containerd[1591]: time="2026-01-14T01:12:05.168772450Z" level=info msg="StartContainer for \"81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8\"" Jan 14 01:12:05.173129 containerd[1591]: time="2026-01-14T01:12:05.173077876Z" level=info msg="connecting to shim 81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8" address="unix:///run/containerd/s/3f4299f8f0dd96965f86d7a735c7d98cb6de67e690939b2f189fca42cf92ac48" protocol=ttrpc version=3 Jan 14 01:12:05.217089 systemd[1]: Started cri-containerd-81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8.scope - libcontainer container 81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8. Jan 14 01:12:05.287000 audit: BPF prog-id=167 op=LOAD Jan 14 01:12:05.290043 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:12:05.290180 kernel: audit: type=1334 audit(1768353125.287:569): prog-id=167 op=LOAD Jan 14 01:12:05.287000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.294418 kernel: audit: type=1300 audit(1768353125.287:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.308777 kernel: audit: type=1327 audit(1768353125.287:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.312912 kernel: audit: type=1334 audit(1768353125.287:570): prog-id=168 op=LOAD Jan 14 01:12:05.287000 audit: BPF prog-id=168 op=LOAD Jan 14 01:12:05.287000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.325188 kernel: audit: type=1300 audit(1768353125.287:570): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.325378 kernel: audit: type=1327 audit(1768353125.287:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.287000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:12:05.331380 kernel: audit: type=1334 audit(1768353125.287:571): prog-id=168 op=UNLOAD Jan 14 01:12:05.287000 audit[3638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.335181 kernel: audit: type=1300 audit(1768353125.287:571): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.342842 kernel: audit: type=1327 audit(1768353125.287:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.287000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:12:05.287000 audit[3638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.355143 kernel: audit: type=1334 audit(1768353125.287:572): prog-id=167 op=UNLOAD Jan 14 01:12:05.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.287000 audit: BPF prog-id=169 op=LOAD Jan 14 01:12:05.287000 audit[3638]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3376 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:05.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831303238323330393931623331386630353939373033393666376237 Jan 14 01:12:05.364785 containerd[1591]: time="2026-01-14T01:12:05.363038950Z" level=info msg="StartContainer for 
\"81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8\" returns successfully" Jan 14 01:12:05.518342 kubelet[2860]: E0114 01:12:05.518256 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:05.766226 kubelet[2860]: E0114 01:12:05.766169 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:06.140703 systemd[1]: cri-containerd-81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8.scope: Deactivated successfully. Jan 14 01:12:06.141042 systemd[1]: cri-containerd-81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8.scope: Consumed 730ms CPU time, 168.7M memory peak, 10.7M read from disk, 171.3M written to disk. Jan 14 01:12:06.143907 containerd[1591]: time="2026-01-14T01:12:06.143443413Z" level=info msg="received container exit event container_id:\"81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8\" id:\"81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8\" pid:3651 exited_at:{seconds:1768353126 nanos:142533721}" Jan 14 01:12:06.144000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:12:06.214882 kubelet[2860]: I0114 01:12:06.214556 2860 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 14 01:12:06.301245 kubelet[2860]: I0114 01:12:06.300391 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4n7\" (UniqueName: \"kubernetes.io/projected/466c1094-6e4a-41dc-8cce-7e44fc6de59f-kube-api-access-2g4n7\") pod \"coredns-66bc5c9577-p7mjs\" (UID: \"466c1094-6e4a-41dc-8cce-7e44fc6de59f\") " pod="kube-system/coredns-66bc5c9577-p7mjs" Jan 14 01:12:06.301987 kubelet[2860]: I0114 01:12:06.301788 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/466c1094-6e4a-41dc-8cce-7e44fc6de59f-config-volume\") pod \"coredns-66bc5c9577-p7mjs\" (UID: \"466c1094-6e4a-41dc-8cce-7e44fc6de59f\") " pod="kube-system/coredns-66bc5c9577-p7mjs" Jan 14 01:12:06.307063 kubelet[2860]: I0114 01:12:06.306542 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c-config-volume\") pod \"coredns-66bc5c9577-ns47v\" (UID: \"6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c\") " pod="kube-system/coredns-66bc5c9577-ns47v" Jan 14 01:12:06.307063 kubelet[2860]: I0114 01:12:06.306625 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvls2\" (UniqueName: \"kubernetes.io/projected/6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c-kube-api-access-cvls2\") pod \"coredns-66bc5c9577-ns47v\" (UID: \"6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c\") " pod="kube-system/coredns-66bc5c9577-ns47v" Jan 14 01:12:06.322540 systemd[1]: Created slice kubepods-burstable-pod6bdb99b4_4ad9_4667_bc93_4c5cc3fb867c.slice - libcontainer container kubepods-burstable-pod6bdb99b4_4ad9_4667_bc93_4c5cc3fb867c.slice. 
Jan 14 01:12:06.365459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81028230991b318f059970396f7b7a542ce07b80e5581fa55fb50237a07f8ac8-rootfs.mount: Deactivated successfully. Jan 14 01:12:06.374151 systemd[1]: Created slice kubepods-burstable-pod466c1094_6e4a_41dc_8cce_7e44fc6de59f.slice - libcontainer container kubepods-burstable-pod466c1094_6e4a_41dc_8cce_7e44fc6de59f.slice. Jan 14 01:12:06.432625 systemd[1]: Created slice kubepods-besteffort-pod7517aecc_466a_4034_a381_03ea5ddb0673.slice - libcontainer container kubepods-besteffort-pod7517aecc_466a_4034_a381_03ea5ddb0673.slice. Jan 14 01:12:06.460139 systemd[1]: Created slice kubepods-besteffort-pod2f5464eb_b23d_4db8_9c9f_50031c3acc8e.slice - libcontainer container kubepods-besteffort-pod2f5464eb_b23d_4db8_9c9f_50031c3acc8e.slice. Jan 14 01:12:06.496877 systemd[1]: Created slice kubepods-besteffort-pod374e8a11_39d9_48c5_bcf7_672988f566dc.slice - libcontainer container kubepods-besteffort-pod374e8a11_39d9_48c5_bcf7_672988f566dc.slice. Jan 14 01:12:06.515246 systemd[1]: Created slice kubepods-besteffort-podd9fd657d_8a4b_430b_8046_e1faf50ffec5.slice - libcontainer container kubepods-besteffort-podd9fd657d_8a4b_430b_8046_e1faf50ffec5.slice. Jan 14 01:12:06.518297 kubelet[2860]: I0114 01:12:06.516189 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xmf\" (UniqueName: \"kubernetes.io/projected/374e8a11-39d9-48c5-bcf7-672988f566dc-kube-api-access-n6xmf\") pod \"goldmane-7c778bb748-8r79p\" (UID: \"374e8a11-39d9-48c5-bcf7-672988f566dc\") " pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:06.518297 kubelet[2860]: I0114 01:12:06.516232 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-backend-key-pair\") pod \"whisker-669fdc4f6c-zvjn5\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " pod="calico-system/whisker-669fdc4f6c-zvjn5" Jan 14 01:12:06.518297 kubelet[2860]: I0114 01:12:06.516254 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zrx\" (UniqueName: \"kubernetes.io/projected/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-kube-api-access-z6zrx\") pod \"whisker-669fdc4f6c-zvjn5\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " pod="calico-system/whisker-669fdc4f6c-zvjn5" Jan 14 01:12:06.518297 kubelet[2860]: I0114 01:12:06.516294 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22kh\" (UniqueName: \"kubernetes.io/projected/7517aecc-466a-4034-a381-03ea5ddb0673-kube-api-access-p22kh\") pod \"calico-kube-controllers-557db9b6c8-cw62p\" (UID: \"7517aecc-466a-4034-a381-03ea5ddb0673\") " pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" Jan 14 01:12:06.518297 kubelet[2860]: I0114 01:12:06.516330 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/374e8a11-39d9-48c5-bcf7-672988f566dc-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8r79p\" (UID: \"374e8a11-39d9-48c5-bcf7-672988f566dc\") " pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:06.521742 kubelet[2860]: I0114 01:12:06.516364 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/374e8a11-39d9-48c5-bcf7-672988f566dc-goldmane-key-pair\") pod \"goldmane-7c778bb748-8r79p\" (UID: \"374e8a11-39d9-48c5-bcf7-672988f566dc\") " pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:06.521742 kubelet[2860]: I0114 01:12:06.516400 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374e8a11-39d9-48c5-bcf7-672988f566dc-config\") pod \"goldmane-7c778bb748-8r79p\" (UID: \"374e8a11-39d9-48c5-bcf7-672988f566dc\") " pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:06.521742 kubelet[2860]: I0114 01:12:06.516426 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7517aecc-466a-4034-a381-03ea5ddb0673-tigera-ca-bundle\") pod \"calico-kube-controllers-557db9b6c8-cw62p\" (UID: \"7517aecc-466a-4034-a381-03ea5ddb0673\") " pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" Jan 14 01:12:06.521742 kubelet[2860]: I0114 01:12:06.516453 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-ca-bundle\") pod \"whisker-669fdc4f6c-zvjn5\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " pod="calico-system/whisker-669fdc4f6c-zvjn5" Jan 14 01:12:06.537084 systemd[1]: Created slice kubepods-besteffort-podc98f05e3_1f7b_4c82_ac93_d4261901eb6e.slice - libcontainer container kubepods-besteffort-podc98f05e3_1f7b_4c82_ac93_d4261901eb6e.slice. Jan 14 01:12:06.617782 kubelet[2860]: I0114 01:12:06.617595 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltslq\" (UniqueName: \"kubernetes.io/projected/c98f05e3-1f7b-4c82-ac93-d4261901eb6e-kube-api-access-ltslq\") pod \"calico-apiserver-5dc87cd5bb-hhclj\" (UID: \"c98f05e3-1f7b-4c82-ac93-d4261901eb6e\") " pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" Jan 14 01:12:06.618523 kubelet[2860]: I0114 01:12:06.618493 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c98f05e3-1f7b-4c82-ac93-d4261901eb6e-calico-apiserver-certs\") pod \"calico-apiserver-5dc87cd5bb-hhclj\" (UID: \"c98f05e3-1f7b-4c82-ac93-d4261901eb6e\") " pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" Jan 14 01:12:06.619009 kubelet[2860]: I0114 01:12:06.618971 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d9fd657d-8a4b-430b-8046-e1faf50ffec5-calico-apiserver-certs\") pod \"calico-apiserver-5dc87cd5bb-56bxg\" (UID: \"d9fd657d-8a4b-430b-8046-e1faf50ffec5\") " pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" Jan 14 01:12:06.619337 kubelet[2860]: I0114 01:12:06.619272 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzbxh\" (UniqueName: \"kubernetes.io/projected/d9fd657d-8a4b-430b-8046-e1faf50ffec5-kube-api-access-mzbxh\") pod \"calico-apiserver-5dc87cd5bb-56bxg\" (UID: \"d9fd657d-8a4b-430b-8046-e1faf50ffec5\") " pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" Jan 14 01:12:06.657626 kubelet[2860]: E0114 01:12:06.657412 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:06.660095 containerd[1591]: time="2026-01-14T01:12:06.660010571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ns47v,Uid:6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:06.730365 kubelet[2860]: E0114 01:12:06.729896 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:06.738004 containerd[1591]: time="2026-01-14T01:12:06.737953724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p7mjs,Uid:466c1094-6e4a-41dc-8cce-7e44fc6de59f,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:06.759696 containerd[1591]: time="2026-01-14T01:12:06.758157157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557db9b6c8-cw62p,Uid:7517aecc-466a-4034-a381-03ea5ddb0673,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:06.785674 containerd[1591]: time="2026-01-14T01:12:06.781976390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669fdc4f6c-zvjn5,Uid:2f5464eb-b23d-4db8-9c9f-50031c3acc8e,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:06.809636 containerd[1591]: time="2026-01-14T01:12:06.809576395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r79p,Uid:374e8a11-39d9-48c5-bcf7-672988f566dc,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:06.810810 kubelet[2860]: E0114 01:12:06.810779 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:06.821181 containerd[1591]: time="2026-01-14T01:12:06.820801315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:12:06.853140 containerd[1591]: time="2026-01-14T01:12:06.853082834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-56bxg,Uid:d9fd657d-8a4b-430b-8046-e1faf50ffec5,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:06.865121 containerd[1591]: time="2026-01-14T01:12:06.865063813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-hhclj,Uid:c98f05e3-1f7b-4c82-ac93-d4261901eb6e,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:07.256035 containerd[1591]: time="2026-01-14T01:12:07.255973070Z" level=error msg="Failed to destroy network for sandbox \"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.257892 containerd[1591]: time="2026-01-14T01:12:07.257803037Z" level=error msg="Failed to destroy network for sandbox \"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.259885 containerd[1591]: time="2026-01-14T01:12:07.259799960Z" level=error msg="Failed to destroy network for sandbox \"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.261169 containerd[1591]: time="2026-01-14T01:12:07.260888572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557db9b6c8-cw62p,Uid:7517aecc-466a-4034-a381-03ea5ddb0673,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.266243 kubelet[2860]: E0114 01:12:07.265949 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.266624 kubelet[2860]: E0114 01:12:07.266573 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" Jan 14 01:12:07.266909 kubelet[2860]: E0114 01:12:07.266760 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" Jan 14 01:12:07.267033 kubelet[2860]: E0114 01:12:07.266878 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-557db9b6c8-cw62p_calico-system(7517aecc-466a-4034-a381-03ea5ddb0673)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-557db9b6c8-cw62p_calico-system(7517aecc-466a-4034-a381-03ea5ddb0673)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edaca2f2b2afc1b15ac2347bd7cab4eea8396b6bb251f01d2adf30efd4c1171e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:07.268584 containerd[1591]: time="2026-01-14T01:12:07.268464431Z" level=error msg="Failed to destroy network for sandbox \"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.273687 containerd[1591]: time="2026-01-14T01:12:07.273580798Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-56bxg,Uid:d9fd657d-8a4b-430b-8046-e1faf50ffec5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.278891 containerd[1591]: time="2026-01-14T01:12:07.274760280Z" level=error msg="Failed to destroy network for sandbox \"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.280639 kubelet[2860]: E0114 01:12:07.279620 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.280639 kubelet[2860]: E0114 01:12:07.279766 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" Jan 14 01:12:07.280639 kubelet[2860]: E0114 01:12:07.279798 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" Jan 14 01:12:07.282550 kubelet[2860]: E0114 01:12:07.279866 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dc87cd5bb-56bxg_calico-apiserver(d9fd657d-8a4b-430b-8046-e1faf50ffec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc87cd5bb-56bxg_calico-apiserver(d9fd657d-8a4b-430b-8046-e1faf50ffec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd27f0a8323a57c99592f8b5ba6e26b820f0d7557389a09ebf714444747310d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:12:07.284521 containerd[1591]: time="2026-01-14T01:12:07.284343280Z" level=error msg="Failed to destroy network for sandbox \"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
14 01:12:07.293608 containerd[1591]: time="2026-01-14T01:12:07.293546792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-669fdc4f6c-zvjn5,Uid:2f5464eb-b23d-4db8-9c9f-50031c3acc8e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.294975 kubelet[2860]: E0114 01:12:07.293972 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.294975 kubelet[2860]: E0114 01:12:07.294031 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-669fdc4f6c-zvjn5" Jan 14 01:12:07.294975 kubelet[2860]: E0114 01:12:07.294057 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-669fdc4f6c-zvjn5" Jan 14 01:12:07.295162 kubelet[2860]: E0114 01:12:07.294126 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-669fdc4f6c-zvjn5_calico-system(2f5464eb-b23d-4db8-9c9f-50031c3acc8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-669fdc4f6c-zvjn5_calico-system(2f5464eb-b23d-4db8-9c9f-50031c3acc8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11abbf7455ec6605200a127b68472c40a3b05a00c76eee99bffa1f4c85d77a49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-669fdc4f6c-zvjn5" podUID="2f5464eb-b23d-4db8-9c9f-50031c3acc8e" Jan 14 01:12:07.296549 containerd[1591]: time="2026-01-14T01:12:07.296483416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p7mjs,Uid:466c1094-6e4a-41dc-8cce-7e44fc6de59f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.298777 kubelet[2860]: E0114 01:12:07.298613 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.299060 kubelet[2860]: E0114 01:12:07.298923 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-p7mjs" Jan 14 01:12:07.299201 kubelet[2860]: E0114 01:12:07.299070 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-p7mjs" Jan 14 01:12:07.299342 kubelet[2860]: E0114 01:12:07.299290 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-p7mjs_kube-system(466c1094-6e4a-41dc-8cce-7e44fc6de59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-p7mjs_kube-system(466c1094-6e4a-41dc-8cce-7e44fc6de59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a0882f7030d3378ca700d0c4c0d7df0c9cbcefaf1ae818dd1b99d2969f137d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-p7mjs" podUID="466c1094-6e4a-41dc-8cce-7e44fc6de59f" Jan 14 01:12:07.312083 containerd[1591]: time="2026-01-14T01:12:07.312009373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ns47v,Uid:6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.312612 kubelet[2860]: E0114 01:12:07.312355 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.312612 kubelet[2860]: E0114 01:12:07.312425 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-ns47v" Jan 14 01:12:07.312612 kubelet[2860]: E0114 01:12:07.312455 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ns47v" Jan 14 01:12:07.314877 kubelet[2860]: E0114 01:12:07.313426 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ns47v_kube-system(6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ns47v_kube-system(6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"866e5e4dd53022491723ec344cb498d8bbc192aad0fa98f7e2b81968d8dde288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ns47v" podUID="6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c" Jan 14 01:12:07.316245 kubelet[2860]: E0114 01:12:07.315511 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.316245 kubelet[2860]: E0114 01:12:07.315576 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" Jan 14 01:12:07.316245 kubelet[2860]: E0114 01:12:07.315600 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" Jan 14 01:12:07.316529 containerd[1591]: time="2026-01-14T01:12:07.315081140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-hhclj,Uid:c98f05e3-1f7b-4c82-ac93-d4261901eb6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.316646 kubelet[2860]: E0114 01:12:07.315675 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5dc87cd5bb-hhclj_calico-apiserver(c98f05e3-1f7b-4c82-ac93-d4261901eb6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dc87cd5bb-hhclj_calico-apiserver(c98f05e3-1f7b-4c82-ac93-d4261901eb6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29f6b09fd7d3feac93f0996c8c567698a3b57dd42dff1e01fb18403f414b8f0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:12:07.317585 containerd[1591]: time="2026-01-14T01:12:07.317526265Z" level=error msg="Failed to destroy network for sandbox \"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.338182 containerd[1591]: time="2026-01-14T01:12:07.338095980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r79p,Uid:374e8a11-39d9-48c5-bcf7-672988f566dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.338603 kubelet[2860]: E0114 01:12:07.338555 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.339014 kubelet[2860]: E0114 01:12:07.338829 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:07.339014 kubelet[2860]: E0114 01:12:07.338874 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8r79p" Jan 14 01:12:07.339014 kubelet[2860]: E0114 01:12:07.338955 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-8r79p_calico-system(374e8a11-39d9-48c5-bcf7-672988f566dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8r79p_calico-system(374e8a11-39d9-48c5-bcf7-672988f566dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1f4216a82b3bae22c2f605d58ae11ba27ee1842307363467bdae1ccdc6aabaf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:12:07.528345 systemd[1]: Created slice kubepods-besteffort-pod9c51b2de_d8d9_4e42_af77_3bf2696395e2.slice - libcontainer container kubepods-besteffort-pod9c51b2de_d8d9_4e42_af77_3bf2696395e2.slice. Jan 14 01:12:07.538856 containerd[1591]: time="2026-01-14T01:12:07.538801743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-598dd,Uid:9c51b2de-d8d9-4e42-af77-3bf2696395e2,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:07.645551 containerd[1591]: time="2026-01-14T01:12:07.645451583Z" level=error msg="Failed to destroy network for sandbox \"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.651226 systemd[1]: run-netns-cni\x2d3709b4f0\x2d8337\x2d66f4\x2d8fb8\x2d94ac15084f76.mount: Deactivated successfully. Jan 14 01:12:07.651850 containerd[1591]: time="2026-01-14T01:12:07.651775191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-598dd,Uid:9c51b2de-d8d9-4e42-af77-3bf2696395e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.654352 kubelet[2860]: E0114 01:12:07.652107 2860 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:07.654352 kubelet[2860]: E0114 01:12:07.652174 2860 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-598dd" Jan 14 01:12:07.654352 kubelet[2860]: E0114 01:12:07.652251 2860 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-598dd" Jan 14 01:12:07.655041 kubelet[2860]: E0114 01:12:07.652328 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07366aa4c83ed0cde609fec40308856559ef2443dd865805b2e23fe978da4e61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:13.524427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683870970.mount: Deactivated successfully. Jan 14 01:12:13.635682 containerd[1591]: time="2026-01-14T01:12:13.626208555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:12:13.639486 containerd[1591]: time="2026-01-14T01:12:13.626016872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:13.657400 containerd[1591]: time="2026-01-14T01:12:13.657324178Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:13.658186 containerd[1591]: time="2026-01-14T01:12:13.658141937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:13.658872 containerd[1591]: time="2026-01-14T01:12:13.658800075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.837930147s" Jan 14 01:12:13.658872 containerd[1591]: time="2026-01-14T01:12:13.658844838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:12:13.693587 containerd[1591]: time="2026-01-14T01:12:13.693477036Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:12:13.762707 containerd[1591]: time="2026-01-14T01:12:13.762277755Z" level=info msg="Container 6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:13.769695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772018436.mount: Deactivated successfully. 
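
Every sandbox failure above reports the same root cause: the Calico CNI plugin checks /var/lib/calico/nodename before wiring up a pod, and that file only exists once the calico/node container is running and has mounted /var/lib/calico/. A minimal sketch of that readiness check, with the path and error wording taken from the log itself (readNodename is a hypothetical helper, not Calico's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the file the log shows the CNI plugin stat'ing before it
// will set up pod networking; calico/node writes it when it starts.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the failure mode in the log: if the file is absent,
// fail with a hint about calico/node; otherwise return the node name.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("CNI add would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}

This also matches the timeline: the ghcr.io/flatcar/calico/node:v3.30.4 pull finishes at 01:12:13 and the calico-node container starts just below, after which the whisker sandbox created at 01:12:15 goes through where the earlier attempts did not.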
Jan 14 01:12:13.792430 containerd[1591]: time="2026-01-14T01:12:13.792243362Z" level=info msg="CreateContainer within sandbox \"75768ba8c64a72c73b8275fea5b1388285fe682c2f2bd1ce6914bffb87da2a77\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00\"" Jan 14 01:12:13.794217 containerd[1591]: time="2026-01-14T01:12:13.794173121Z" level=info msg="StartContainer for \"6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00\"" Jan 14 01:12:13.798985 containerd[1591]: time="2026-01-14T01:12:13.798937367Z" level=info msg="connecting to shim 6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00" address="unix:///run/containerd/s/3f4299f8f0dd96965f86d7a735c7d98cb6de67e690939b2f189fca42cf92ac48" protocol=ttrpc version=3 Jan 14 01:12:13.970361 systemd[1]: Started cri-containerd-6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00.scope - libcontainer container 6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00. Jan 14 01:12:14.060000 audit: BPF prog-id=170 op=LOAD Jan 14 01:12:14.072175 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:12:14.072313 kernel: audit: type=1334 audit(1768353134.060:575): prog-id=170 op=LOAD Jan 14 01:12:14.072371 kernel: audit: type=1300 audit(1768353134.060:575): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit[3907]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.081592 kernel: audit: type=1327 audit(1768353134.060:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.087776 kernel: audit: type=1334 audit(1768353134.060:576): prog-id=171 op=LOAD Jan 14 01:12:14.060000 audit: BPF prog-id=171 op=LOAD Jan 14 01:12:14.089818 kernel: audit: type=1300 audit(1768353134.060:576): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit[3907]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.060000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:12:14.113826 kernel: audit: type=1327 audit(1768353134.060:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.113925 kernel: audit: type=1334 audit(1768353134.060:577): prog-id=171 op=UNLOAD Jan 14 01:12:14.060000 audit[3907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.120198 kernel: audit: type=1300 audit(1768353134.060:577): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.126182 kernel: audit: type=1327 audit(1768353134.060:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.133071 kernel: audit: type=1334 audit(1768353134.060:578): prog-id=170 op=UNLOAD Jan 14 01:12:14.060000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:12:14.060000 audit[3907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.060000 audit: BPF prog-id=172 op=LOAD Jan 14 01:12:14.060000 audit[3907]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3376 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661363833366164343064656565656335616565623861303130313561 Jan 14 01:12:14.175920 
containerd[1591]: time="2026-01-14T01:12:14.175754410Z" level=info msg="StartContainer for \"6a6836ad40deeeec5aeeb8a01015a70e3afa6ea141f81a847bfba01f94002a00\" returns successfully" Jan 14 01:12:14.319957 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:12:14.320136 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:12:14.694567 kubelet[2860]: I0114 01:12:14.694053 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zrx\" (UniqueName: \"kubernetes.io/projected/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-kube-api-access-z6zrx\") pod \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " Jan 14 01:12:14.694567 kubelet[2860]: I0114 01:12:14.694145 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-backend-key-pair\") pod \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " Jan 14 01:12:14.694567 kubelet[2860]: I0114 01:12:14.694189 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-ca-bundle\") pod \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\" (UID: \"2f5464eb-b23d-4db8-9c9f-50031c3acc8e\") " Jan 14 01:12:14.695899 kubelet[2860]: I0114 01:12:14.695340 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2f5464eb-b23d-4db8-9c9f-50031c3acc8e" (UID: "2f5464eb-b23d-4db8-9c9f-50031c3acc8e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:12:14.707014 kubelet[2860]: I0114 01:12:14.706907 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-kube-api-access-z6zrx" (OuterVolumeSpecName: "kube-api-access-z6zrx") pod "2f5464eb-b23d-4db8-9c9f-50031c3acc8e" (UID: "2f5464eb-b23d-4db8-9c9f-50031c3acc8e"). InnerVolumeSpecName "kube-api-access-z6zrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:12:14.712732 kubelet[2860]: I0114 01:12:14.711004 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2f5464eb-b23d-4db8-9c9f-50031c3acc8e" (UID: "2f5464eb-b23d-4db8-9c9f-50031c3acc8e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:12:14.713304 systemd[1]: var-lib-kubelet-pods-2f5464eb\x2db23d\x2d4db8\x2d9c9f\x2d50031c3acc8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6zrx.mount: Deactivated successfully. Jan 14 01:12:14.726235 systemd[1]: var-lib-kubelet-pods-2f5464eb\x2db23d\x2d4db8\x2d9c9f\x2d50031c3acc8e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
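
The mount units being deactivated above carry systemd-escaped names: "-" stands in for "/", and literal bytes are encoded as \xNN (\x2d for "-", \x7e for "~"). A small sketch that recovers the original path from such a unit name, assuming the standard systemd escaping rules (unescapeUnitPath is a hypothetical helper; systemd-escape --unescape --path does this for real):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's path escaping as used in mount unit
// names: "-" separates path components, "\xNN" encodes a literal byte.
func unescapeUnitPath(unit string) string {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(unit); i++ {
		switch {
		case unit[i] == '-':
			b.WriteByte('/')
		case unit[i] == '\\' && i+3 < len(unit) && unit[i+1] == 'x':
			if v, err := strconv.ParseUint(unit[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(unit[i])
		default:
			b.WriteByte(unit[i])
		}
	}
	return b.String()
}

func main() {
	unit := `var-lib-kubelet-pods-2f5464eb\x2db23d\x2d4db8\x2d9c9f\x2d50031c3acc8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6zrx.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /var/lib/kubelet/pods/2f5464eb-b23d-4db8-9c9f-50031c3acc8e/volumes/kubernetes.io~projected/kube-api-access-z6zrx
}

The decoded path matches the volumes directory the kubelet later reports cleaning up for pod 2f5464eb-b23d-4db8-9c9f-50031c3acc8e.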
Jan 14 01:12:14.795626 kubelet[2860]: I0114 01:12:14.795564 2860 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6zrx\" (UniqueName: \"kubernetes.io/projected/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-kube-api-access-z6zrx\") on node \"ci-4578.0.0-p-c80f5dee3b\" DevicePath \"\"" Jan 14 01:12:14.795626 kubelet[2860]: I0114 01:12:14.795611 2860 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-backend-key-pair\") on node \"ci-4578.0.0-p-c80f5dee3b\" DevicePath \"\"" Jan 14 01:12:14.795626 kubelet[2860]: I0114 01:12:14.795627 2860 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f5464eb-b23d-4db8-9c9f-50031c3acc8e-whisker-ca-bundle\") on node \"ci-4578.0.0-p-c80f5dee3b\" DevicePath \"\"" Jan 14 01:12:14.867346 kubelet[2860]: E0114 01:12:14.867298 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:14.877390 systemd[1]: Removed slice kubepods-besteffort-pod2f5464eb_b23d_4db8_9c9f_50031c3acc8e.slice - libcontainer container kubepods-besteffort-pod2f5464eb_b23d_4db8_9c9f_50031c3acc8e.slice. Jan 14 01:12:14.910263 kubelet[2860]: I0114 01:12:14.910135 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-79xjv" podStartSLOduration=2.972312638 podStartE2EDuration="19.910080375s" podCreationTimestamp="2026-01-14 01:11:55 +0000 UTC" firstStartedPulling="2026-01-14 01:11:56.722208146 +0000 UTC m=+24.471209165" lastFinishedPulling="2026-01-14 01:12:13.65997588 +0000 UTC m=+41.408976902" observedRunningTime="2026-01-14 01:12:14.906911115 +0000 UTC m=+42.655912160" watchObservedRunningTime="2026-01-14 01:12:14.910080375 +0000 UTC m=+42.659081430" Jan 14 01:12:14.997871 kubelet[2860]: I0114 01:12:14.996848 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a73596b3-7664-4579-a0ee-92d4f6e8b5c2-whisker-ca-bundle\") pod \"whisker-69bb8b8b79-mhzjk\" (UID: \"a73596b3-7664-4579-a0ee-92d4f6e8b5c2\") " pod="calico-system/whisker-69bb8b8b79-mhzjk" Jan 14 01:12:14.997871 kubelet[2860]: I0114 01:12:14.996892 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj8xx\" (UniqueName: \"kubernetes.io/projected/a73596b3-7664-4579-a0ee-92d4f6e8b5c2-kube-api-access-lj8xx\") pod \"whisker-69bb8b8b79-mhzjk\" (UID: \"a73596b3-7664-4579-a0ee-92d4f6e8b5c2\") " pod="calico-system/whisker-69bb8b8b79-mhzjk" Jan 14 01:12:14.997871 kubelet[2860]: I0114 01:12:14.996914 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a73596b3-7664-4579-a0ee-92d4f6e8b5c2-whisker-backend-key-pair\") pod \"whisker-69bb8b8b79-mhzjk\" (UID: \"a73596b3-7664-4579-a0ee-92d4f6e8b5c2\") " pod="calico-system/whisker-69bb8b8b79-mhzjk" Jan 14 01:12:15.009474 systemd[1]: Created slice kubepods-besteffort-poda73596b3_7664_4579_a0ee_92d4f6e8b5c2.slice - libcontainer container kubepods-besteffort-poda73596b3_7664_4579_a0ee_92d4f6e8b5c2.slice. 
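
The pod_startup_latency_tracker entry above for calico-node-79xjv is internally consistent: podStartE2EDuration is the gap between podCreationTimestamp (01:11:55) and the observed running time (01:12:14.910080375), the image pull accounts for most of it (m=+24.471 to m=+41.409 on the kubelet's monotonic clock), and podStartSLOduration here equals the end-to-end time minus that pull window. A short sketch reproducing those numbers from the values logged above (the variable names are mine):

package main

import "fmt"

func main() {
	// Monotonic offsets (seconds since kubelet start) copied from the log entry above.
	firstStartedPulling := 24.471209165
	lastFinishedPulling := 41.408976902

	// Wall-clock seconds past 01:11:00 for creation and observed running time.
	creation := 55.0
	running := 60 + 14.910080375

	pull := lastFinishedPulling - firstStartedPulling
	e2e := running - creation

	fmt.Printf("image pull window:   %.9fs\n", pull)     // ~16.937767737s
	fmt.Printf("podStartE2EDuration: %.9fs\n", e2e)      // 19.910080375s, as logged
	fmt.Printf("podStartSLOduration: %.9fs\n", e2e-pull) // ~2.972312638s, as logged (up to float rounding)
}
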
Jan 14 01:12:15.325299 containerd[1591]: time="2026-01-14T01:12:15.324732571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bb8b8b79-mhzjk,Uid:a73596b3-7664-4579-a0ee-92d4f6e8b5c2,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:15.713149 systemd-networkd[1500]: cali2b1ff129ba5: Link UP Jan 14 01:12:15.718725 systemd-networkd[1500]: cali2b1ff129ba5: Gained carrier Jan 14 01:12:15.743684 containerd[1591]: 2026-01-14 01:12:15.371 [INFO][3972] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:12:15.743684 containerd[1591]: 2026-01-14 01:12:15.407 [INFO][3972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0 whisker-69bb8b8b79- calico-system a73596b3-7664-4579-a0ee-92d4f6e8b5c2 982 0 2026-01-14 01:12:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69bb8b8b79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b whisker-69bb8b8b79-mhzjk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2b1ff129ba5 [] [] }} ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-" Jan 14 01:12:15.743684 containerd[1591]: 2026-01-14 01:12:15.407 [INFO][3972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.743684 containerd[1591]: 2026-01-14 01:12:15.603 [INFO][3983] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" HandleID="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.604 [INFO][3983] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" HandleID="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003361d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"whisker-69bb8b8b79-mhzjk", "timestamp":"2026-01-14 01:12:15.603000844 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.605 [INFO][3983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.605 [INFO][3983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.605 [INFO][3983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.626 [INFO][3983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.660 [INFO][3983] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.667 [INFO][3983] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.670 [INFO][3983] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.744703 containerd[1591]: 2026-01-14 01:12:15.674 [INFO][3983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.674 [INFO][3983] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.676 [INFO][3983] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0 Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.684 [INFO][3983] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.691 [INFO][3983] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.65/26] block=192.168.60.64/26 handle="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.691 [INFO][3983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.65/26] handle="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.691 [INFO][3983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
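
In the IPAM trace above the node already holds an affinity for the block 192.168.60.64/26, so the plugin claims 192.168.60.65 from it; the workload endpoint written just below then carries that address as a /32. A short sketch of the containment arithmetic using Go's net/netip (illustration only, not Calico's IPAM code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.60.64/26") // block with node affinity, from the log
	addr := netip.MustParseAddr("192.168.60.65")       // address claimed for whisker-69bb8b8b79-mhzjk

	fmt.Println("block contains addr:", block.Contains(addr)) // true
	fmt.Println("addresses in a /26: ", 1<<(32-block.Bits())) // 64
	fmt.Println("pod IPNetwork:      ", netip.PrefixFrom(addr, 32)) // 192.168.60.65/32
}
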
Jan 14 01:12:15.745457 containerd[1591]: 2026-01-14 01:12:15.691 [INFO][3983] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.65/26] IPv6=[] ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" HandleID="k8s-pod-network.62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.746588 containerd[1591]: 2026-01-14 01:12:15.695 [INFO][3972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0", GenerateName:"whisker-69bb8b8b79-", Namespace:"calico-system", SelfLink:"", UID:"a73596b3-7664-4579-a0ee-92d4f6e8b5c2", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69bb8b8b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"whisker-69bb8b8b79-mhzjk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b1ff129ba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:15.746588 containerd[1591]: 2026-01-14 01:12:15.695 [INFO][3972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.65/32] ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.747224 containerd[1591]: 2026-01-14 01:12:15.696 [INFO][3972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b1ff129ba5 ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.747224 containerd[1591]: 2026-01-14 01:12:15.719 [INFO][3972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.747637 containerd[1591]: 2026-01-14 01:12:15.719 [INFO][3972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" 
Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0", GenerateName:"whisker-69bb8b8b79-", Namespace:"calico-system", SelfLink:"", UID:"a73596b3-7664-4579-a0ee-92d4f6e8b5c2", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69bb8b8b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0", Pod:"whisker-69bb8b8b79-mhzjk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b1ff129ba5", MAC:"9e:8a:f9:6d:dc:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:15.749410 containerd[1591]: 2026-01-14 01:12:15.733 [INFO][3972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" Namespace="calico-system" Pod="whisker-69bb8b8b79-mhzjk" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-whisker--69bb8b8b79--mhzjk-eth0" Jan 14 01:12:15.872559 kubelet[2860]: I0114 01:12:15.871112 2860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:12:15.874646 kubelet[2860]: E0114 01:12:15.873404 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:15.981537 containerd[1591]: time="2026-01-14T01:12:15.980108502Z" level=info msg="connecting to shim 62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0" address="unix:///run/containerd/s/c0d7b36c66e41b0fd804e2d1be45864f05d821bbaa8e6bb13ddf54ddefdea246" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:16.033281 systemd[1]: Started cri-containerd-62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0.scope - libcontainer container 62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0. 
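
The MAC added to the endpoint, 9e:8a:f9:6d:dc:b0, is a locally administered unicast address, which is what a generated interface MAC (as opposed to a vendor-assigned one) looks like. A quick check of the two relevant bits in the first octet (sketch only):

package main

import (
	"fmt"
	"net"
)

func main() {
	mac, err := net.ParseMAC("9e:8a:f9:6d:dc:b0") // MAC from the endpoint dump above
	if err != nil {
		panic(err)
	}
	fmt.Printf("locally administered: %t\n", mac[0]&0x02 != 0) // bit 1 set -> not vendor-assigned
	fmt.Printf("unicast:              %t\n", mac[0]&0x01 == 0) // bit 0 clear -> unicast
}
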
Jan 14 01:12:16.057000 audit: BPF prog-id=173 op=LOAD Jan 14 01:12:16.058000 audit: BPF prog-id=174 op=LOAD Jan 14 01:12:16.058000 audit[4016]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.058000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:12:16.058000 audit[4016]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.058000 audit: BPF prog-id=175 op=LOAD Jan 14 01:12:16.058000 audit[4016]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.059000 audit: BPF prog-id=176 op=LOAD Jan 14 01:12:16.059000 audit[4016]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.059000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:12:16.059000 audit[4016]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.059000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:12:16.059000 audit[4016]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.059000 audit: BPF prog-id=177 op=LOAD Jan 14 01:12:16.059000 audit[4016]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4005 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632313732653164393434663738323438363832336430323232343339 Jan 14 01:12:16.169151 containerd[1591]: time="2026-01-14T01:12:16.169072718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bb8b8b79-mhzjk,Uid:a73596b3-7664-4579-a0ee-92d4f6e8b5c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"62172e1d944f782486823d0222439ac5a0ee02e654c4ea99089c17ba8011a6a0\"" Jan 14 01:12:16.173228 containerd[1591]: time="2026-01-14T01:12:16.173013275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:16.534126 kubelet[2860]: I0114 01:12:16.534033 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5464eb-b23d-4db8-9c9f-50031c3acc8e" path="/var/lib/kubelet/pods/2f5464eb-b23d-4db8-9c9f-50031c3acc8e/volumes" Jan 14 01:12:16.560258 containerd[1591]: time="2026-01-14T01:12:16.560086942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:16.564509 containerd[1591]: time="2026-01-14T01:12:16.564080724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:16.572518 containerd[1591]: time="2026-01-14T01:12:16.564018306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:16.573819 kubelet[2860]: E0114 01:12:16.573769 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:16.578747 kubelet[2860]: E0114 01:12:16.578644 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:16.579099 kubelet[2860]: E0114 01:12:16.578976 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:16.581178 containerd[1591]: time="2026-01-14T01:12:16.580858499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:16.814955 systemd-networkd[1500]: cali2b1ff129ba5: Gained IPv6LL Jan 14 01:12:16.830000 audit: BPF prog-id=178 op=LOAD Jan 14 01:12:16.830000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaf2a1e30 a2=98 a3=1fffffffffffffff items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.830000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.830000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:12:16.830000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcaf2a1e00 a3=0 items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.830000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.831000 audit: BPF prog-id=179 op=LOAD Jan 14 01:12:16.831000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaf2a1d10 a2=94 a3=3 items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.832000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:12:16.832000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcaf2a1d10 a2=94 a3=3 items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.832000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.832000 audit: BPF prog-id=180 op=LOAD Jan 14 01:12:16.832000 audit[4164]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaf2a1d50 a2=94 a3=7ffcaf2a1f30 items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.832000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.832000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:12:16.832000 audit[4164]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcaf2a1d50 a2=94 a3=7ffcaf2a1f30 items=0 ppid=4064 pid=4164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.832000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:16.838000 audit: BPF prog-id=181 op=LOAD Jan 14 01:12:16.838000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde7f5a900 a2=98 a3=3 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.838000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:16.838000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:12:16.838000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde7f5a8d0 a3=0 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.838000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:16.839000 audit: BPF prog-id=182 op=LOAD Jan 14 01:12:16.839000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde7f5a6f0 a2=94 a3=54428f items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.839000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:16.839000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:12:16.839000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde7f5a6f0 a2=94 a3=54428f items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.839000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:16.839000 audit: BPF prog-id=183 op=LOAD Jan 14 01:12:16.839000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde7f5a720 a2=94 a3=2 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.839000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Jan 14 01:12:16.839000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:12:16.839000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde7f5a720 a2=0 a3=2 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:16.839000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.107000 audit: BPF prog-id=184 op=LOAD Jan 14 01:12:17.107000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde7f5a5e0 a2=94 a3=1 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.107000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:12:17.107000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde7f5a5e0 a2=94 a3=1 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.128879 containerd[1591]: time="2026-01-14T01:12:17.128632631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:17.131104 containerd[1591]: time="2026-01-14T01:12:17.130931852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:17.131104 containerd[1591]: time="2026-01-14T01:12:17.130940683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:17.131611 kubelet[2860]: E0114 01:12:17.131536 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:17.132291 kubelet[2860]: E0114 01:12:17.131615 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:17.132291 kubelet[2860]: E0114 01:12:17.131769 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:17.132291 kubelet[2860]: E0114 01:12:17.131835 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:12:17.134000 audit: BPF prog-id=185 op=LOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde7f5a5d0 a2=94 a3=4 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.134000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde7f5a5d0 a2=0 a3=4 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.134000 audit: BPF prog-id=186 op=LOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffde7f5a430 a2=94 a3=5 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.134000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde7f5a430 a2=0 a3=5 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.134000 audit: BPF prog-id=187 op=LOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde7f5a650 a2=94 a3=6 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.134000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:12:17.134000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde7f5a650 a2=0 a3=6 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.134000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.135000 audit: BPF prog-id=188 op=LOAD Jan 14 
01:12:17.135000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde7f59e00 a2=94 a3=88 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.135000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.137000 audit: BPF prog-id=189 op=LOAD Jan 14 01:12:17.137000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde7f59c80 a2=94 a3=2 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.137000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:12:17.137000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde7f59cb0 a2=0 a3=7ffde7f59db0 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.138000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:12:17.138000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1987cd10 a2=0 a3=22952fd47a027f52 items=0 ppid=4064 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.138000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:17.154000 audit: BPF prog-id=190 op=LOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda8bfa5e0 a2=98 a3=1999999999999999 items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.154000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffda8bfa5b0 a3=0 items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.154000 audit: BPF prog-id=191 op=LOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda8bfa4c0 a2=94 a3=ffff items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.154000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffda8bfa4c0 a2=94 a3=ffff items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.154000 audit: BPF prog-id=192 op=LOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffda8bfa500 a2=94 a3=7ffda8bfa6e0 items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.154000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:12:17.154000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffda8bfa500 a2=94 a3=7ffda8bfa6e0 items=0 ppid=4064 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.154000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:17.264174 systemd-networkd[1500]: vxlan.calico: Link UP Jan 14 01:12:17.264188 systemd-networkd[1500]: vxlan.calico: Gained carrier Jan 14 01:12:17.323000 audit: BPF prog-id=193 op=LOAD Jan 14 01:12:17.323000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62cb6d90 a2=98 a3=0 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.323000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.323000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:12:17.323000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd62cb6d60 a3=0 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.323000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.325000 audit: BPF prog-id=194 op=LOAD Jan 14 01:12:17.325000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62cb6ba0 a2=94 a3=54428f items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.325000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.326000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:12:17.326000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62cb6ba0 a2=94 a3=54428f items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.326000 audit: BPF prog-id=195 op=LOAD Jan 14 01:12:17.326000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62cb6bd0 a2=94 a3=2 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.326000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:12:17.326000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62cb6bd0 a2=0 a3=2 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.326000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.326000 audit: BPF prog-id=196 op=LOAD Jan 14 01:12:17.326000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62cb6980 a2=94 a3=4 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.326000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.327000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:12:17.327000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd62cb6980 a2=94 a3=4 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.327000 audit: BPF prog-id=197 op=LOAD Jan 14 01:12:17.327000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62cb6a80 a2=94 a3=7ffd62cb6c00 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.328000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:12:17.328000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd62cb6a80 a2=0 a3=7ffd62cb6c00 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.328000 audit: BPF prog-id=198 op=LOAD Jan 14 01:12:17.328000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62cb61b0 a2=94 a3=2 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.328000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:12:17.328000 audit[4196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd62cb61b0 a2=0 a3=2 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.328000 audit: BPF prog-id=199 op=LOAD Jan 14 01:12:17.328000 audit[4196]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd62cb62b0 a2=94 a3=30 items=0 ppid=4064 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:17.347000 audit: BPF prog-id=200 op=LOAD Jan 14 01:12:17.347000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffedf738820 a2=98 a3=0 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.347000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:12:17.347000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffedf7387f0 a3=0 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.348000 audit: BPF prog-id=201 op=LOAD Jan 14 01:12:17.348000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedf738610 a2=94 a3=54428f items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.348000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:12:17.348000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedf738610 a2=94 a3=54428f items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.348000 audit: BPF prog-id=202 op=LOAD Jan 14 01:12:17.348000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedf738640 a2=94 a3=2 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.348000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.348000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:12:17.348000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedf738640 a2=0 a3=2 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.348000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.614000 audit: BPF prog-id=203 op=LOAD Jan 14 01:12:17.614000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffedf738500 a2=94 a3=1 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.614000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.614000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:12:17.614000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffedf738500 a2=94 a3=1 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.614000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.638000 audit: BPF prog-id=204 op=LOAD Jan 14 01:12:17.638000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedf7384f0 a2=94 a3=4 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.638000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:12:17.638000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffedf7384f0 a2=0 a3=4 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.639000 audit: BPF prog-id=205 op=LOAD Jan 14 01:12:17.639000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffedf738350 a2=94 a3=5 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.639000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:12:17.639000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffedf738350 a2=0 a3=5 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.639000 audit: BPF prog-id=206 op=LOAD Jan 14 01:12:17.639000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedf738570 a2=94 a3=6 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.639000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.640000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:12:17.640000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffedf738570 a2=0 a3=6 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.641000 audit: BPF prog-id=207 op=LOAD Jan 14 01:12:17.641000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffedf737d20 a2=94 a3=88 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.641000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.641000 audit: BPF prog-id=208 op=LOAD Jan 14 01:12:17.641000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffedf737ba0 a2=94 a3=2 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.641000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.642000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:12:17.642000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffedf737bd0 a2=0 
a3=7ffedf737cd0 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.642000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.642000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:12:17.642000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1f26ed10 a2=0 a3=1c2b3289203ee941 items=0 ppid=4064 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.642000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:17.653000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:12:17.653000 audit[4064]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0010b3580 a2=0 a3=0 items=0 ppid=4044 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.653000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:12:17.749000 audit[4223]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:17.749000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe15691770 a2=0 a3=7ffe1569175c items=0 ppid=4064 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.749000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:17.753000 audit[4227]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:17.753000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe13395120 a2=0 a3=7ffe1339510c items=0 ppid=4064 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.753000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:17.763000 audit[4226]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:17.763000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd34176370 a2=0 a3=7ffd3417635c items=0 ppid=4064 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:12:17.763000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:17.774000 audit[4230]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:17.774000 audit[4230]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdd3cd4700 a2=0 a3=7ffdd3cd46ec items=0 ppid=4064 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:17.882492 kubelet[2860]: E0114 01:12:17.882324 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:12:17.936000 audit[4239]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:17.936000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4d4b52f0 a2=0 a3=7ffe4d4b52dc items=0 ppid=2970 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:17.943000 audit[4239]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:17.943000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe4d4b52f0 a2=0 a3=0 items=0 ppid=2970 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:18.524352 containerd[1591]: time="2026-01-14T01:12:18.524273023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-598dd,Uid:9c51b2de-d8d9-4e42-af77-3bf2696395e2,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:18.714154 systemd-networkd[1500]: cali282026fa0cb: Link UP Jan 14 01:12:18.716533 
systemd-networkd[1500]: cali282026fa0cb: Gained carrier Jan 14 01:12:18.752823 containerd[1591]: 2026-01-14 01:12:18.593 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0 csi-node-driver- calico-system 9c51b2de-d8d9-4e42-af77-3bf2696395e2 778 0 2026-01-14 01:11:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b csi-node-driver-598dd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali282026fa0cb [] [] }} ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-" Jan 14 01:12:18.752823 containerd[1591]: 2026-01-14 01:12:18.594 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.752823 containerd[1591]: 2026-01-14 01:12:18.641 [INFO][4255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" HandleID="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.641 [INFO][4255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" HandleID="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"csi-node-driver-598dd", "timestamp":"2026-01-14 01:12:18.641569202 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.641 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.642 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
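The audit records in this section carry each helper's command line as a hex-encoded, NUL-separated proctitle field. A minimal Python sketch that decodes those values back into readable commands (the decode_proctitle helper name is ours, purely illustrative, not part of any tool appearing in this log):

# Illustrative only: decode an audit PROCTITLE value (argv encoded as
# NUL-separated hex) into a readable command line.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

if __name__ == "__main__":
    # proctitle value copied verbatim from one of the bpftool records above
    sample = (
        "627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F"
        "676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F"
        "6172726179006B657900340076616C7565003400656E74726965730033006E616D65"
        "0063616C695F63746C625F70726F677300666C6167730030"
    )
    print(decode_proctitle(sample))
    # -> bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array
    #    key 4 value 4 entries 3 name cali_ctlb_progs flags 0

Decoded the same way, the comm="iptables-nft-re" records in this section resolve to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000, spawned from the calico-node process (ppid=4064 in those records).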
Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.642 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.654 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.662 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.670 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.675 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.753736 containerd[1591]: 2026-01-14 01:12:18.680 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.680 [INFO][4255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.684 [INFO][4255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.692 [INFO][4255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.702 [INFO][4255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.66/26] block=192.168.60.64/26 handle="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.702 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.66/26] handle="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.702 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
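On the IPAM arithmetic reported above: the claimed address 192.168.60.66/26 is a host inside the node's affine block 192.168.60.64/26, and the endpoint is later recorded with the /32 form of the same address. A short standard-library Python check (purely illustrative, not Calico code) makes the relationship explicit:

import ipaddress

block = ipaddress.ip_network("192.168.60.64/26")      # affinity block from the IPAM records
claimed = ipaddress.ip_interface("192.168.60.66/26")  # address assigned to csi-node-driver-598dd

assert claimed.network == block   # the claimed address sits in the affine /26 block
assert claimed.ip in block
print(f"{claimed.ip} is offset {int(claimed.ip) - int(block.network_address)} "
      f"in the {block.num_addresses}-address block {block}")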
Jan 14 01:12:18.754835 containerd[1591]: 2026-01-14 01:12:18.702 [INFO][4255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.66/26] IPv6=[] ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" HandleID="k8s-pod-network.5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.755216 containerd[1591]: 2026-01-14 01:12:18.707 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c51b2de-d8d9-4e42-af77-3bf2696395e2", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"csi-node-driver-598dd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali282026fa0cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:18.755348 containerd[1591]: 2026-01-14 01:12:18.707 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.66/32] ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.755348 containerd[1591]: 2026-01-14 01:12:18.708 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali282026fa0cb ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.755348 containerd[1591]: 2026-01-14 01:12:18.715 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.757525 containerd[1591]: 2026-01-14 01:12:18.719 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c51b2de-d8d9-4e42-af77-3bf2696395e2", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf", Pod:"csi-node-driver-598dd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali282026fa0cb", MAC:"ee:e3:1f:94:76:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:18.757876 containerd[1591]: 2026-01-14 01:12:18.747 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" Namespace="calico-system" Pod="csi-node-driver-598dd" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-csi--node--driver--598dd-eth0" Jan 14 01:12:18.770000 audit[4270]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:18.770000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffedf45a320 a2=0 a3=7ffedf45a30c items=0 ppid=4064 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.770000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:18.801441 containerd[1591]: time="2026-01-14T01:12:18.801189439Z" level=info msg="connecting to shim 5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf" address="unix:///run/containerd/s/1aadc55367c1f3d95c06795444402065375096fb1887b1e4db3abffeffe5d057" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:18.864157 systemd[1]: Started cri-containerd-5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf.scope - libcontainer container 5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf. 
Jan 14 01:12:18.884000 audit: BPF prog-id=209 op=LOAD Jan 14 01:12:18.885000 audit: BPF prog-id=210 op=LOAD Jan 14 01:12:18.885000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.885000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:12:18.885000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.885000 audit: BPF prog-id=211 op=LOAD Jan 14 01:12:18.885000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.886000 audit: BPF prog-id=212 op=LOAD Jan 14 01:12:18.886000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.886000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:12:18.886000 audit[4290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.886000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:12:18.886000 audit[4290]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.886000 audit: BPF prog-id=213 op=LOAD Jan 14 01:12:18.886000 audit[4290]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4279 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:18.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616563396631323935396430316138376262633765363932353939 Jan 14 01:12:18.911401 containerd[1591]: time="2026-01-14T01:12:18.911351415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-598dd,Uid:9c51b2de-d8d9-4e42-af77-3bf2696395e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5aaec9f12959d01a87bbc7e692599d7a51a8e20e6da9f276e379e1b8633673cf\"" Jan 14 01:12:18.915740 containerd[1591]: time="2026-01-14T01:12:18.914479035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:12:19.218698 containerd[1591]: time="2026-01-14T01:12:19.218432989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:19.220427 containerd[1591]: time="2026-01-14T01:12:19.220269331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:12:19.220427 containerd[1591]: time="2026-01-14T01:12:19.220391331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:19.221693 kubelet[2860]: E0114 01:12:19.220638 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:19.221693 kubelet[2860]: E0114 01:12:19.220749 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:19.221693 kubelet[2860]: E0114 01:12:19.220890 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:19.234797 
containerd[1591]: time="2026-01-14T01:12:19.234606786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:12:19.246937 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Jan 14 01:12:19.523690 containerd[1591]: time="2026-01-14T01:12:19.523509795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-hhclj,Uid:c98f05e3-1f7b-4c82-ac93-d4261901eb6e,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:19.526306 containerd[1591]: time="2026-01-14T01:12:19.526243473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557db9b6c8-cw62p,Uid:7517aecc-466a-4034-a381-03ea5ddb0673,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:19.568118 containerd[1591]: time="2026-01-14T01:12:19.567920281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:19.570231 containerd[1591]: time="2026-01-14T01:12:19.570173955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:12:19.570460 containerd[1591]: time="2026-01-14T01:12:19.570308708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:19.571476 kubelet[2860]: E0114 01:12:19.570812 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:19.571476 kubelet[2860]: E0114 01:12:19.570876 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:19.571476 kubelet[2860]: E0114 01:12:19.570987 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:19.571476 kubelet[2860]: E0114 01:12:19.571044 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:19.826757 
systemd-networkd[1500]: cali4354032ee22: Link UP Jan 14 01:12:19.832826 systemd-networkd[1500]: cali4354032ee22: Gained carrier Jan 14 01:12:19.880825 containerd[1591]: 2026-01-14 01:12:19.619 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0 calico-kube-controllers-557db9b6c8- calico-system 7517aecc-466a-4034-a381-03ea5ddb0673 905 0 2026-01-14 01:11:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:557db9b6c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b calico-kube-controllers-557db9b6c8-cw62p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4354032ee22 [] [] }} ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-" Jan 14 01:12:19.880825 containerd[1591]: 2026-01-14 01:12:19.621 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.880825 containerd[1591]: 2026-01-14 01:12:19.717 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" HandleID="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.717 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" HandleID="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"calico-kube-controllers-557db9b6c8-cw62p", "timestamp":"2026-01-14 01:12:19.717389966 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.717 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.717 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.717 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.736 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.753 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.773 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.776 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.881732 containerd[1591]: 2026-01-14 01:12:19.781 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.781 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.784 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05 Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.792 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4338] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.67/26] block=192.168.60.64/26 handle="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.67/26] handle="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:19.882151 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.67/26] IPv6=[] ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" HandleID="k8s-pod-network.ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.882463 containerd[1591]: 2026-01-14 01:12:19.820 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0", GenerateName:"calico-kube-controllers-557db9b6c8-", Namespace:"calico-system", SelfLink:"", UID:"7517aecc-466a-4034-a381-03ea5ddb0673", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557db9b6c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"calico-kube-controllers-557db9b6c8-cw62p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4354032ee22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:19.882605 containerd[1591]: 2026-01-14 01:12:19.820 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.67/32] ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.882605 containerd[1591]: 2026-01-14 01:12:19.820 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4354032ee22 ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.882605 containerd[1591]: 2026-01-14 01:12:19.836 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" 
WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.888892 containerd[1591]: 2026-01-14 01:12:19.846 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0", GenerateName:"calico-kube-controllers-557db9b6c8-", Namespace:"calico-system", SelfLink:"", UID:"7517aecc-466a-4034-a381-03ea5ddb0673", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"557db9b6c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05", Pod:"calico-kube-controllers-557db9b6c8-cw62p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4354032ee22", MAC:"d2:7d:8c:e9:ba:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:19.889066 containerd[1591]: 2026-01-14 01:12:19.875 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" Namespace="calico-system" Pod="calico-kube-controllers-557db9b6c8-cw62p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--kube--controllers--557db9b6c8--cw62p-eth0" Jan 14 01:12:19.904173 kubelet[2860]: E0114 01:12:19.904012 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:19.964220 containerd[1591]: 
time="2026-01-14T01:12:19.963989221Z" level=info msg="connecting to shim ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05" address="unix:///run/containerd/s/8563ac1d7bb1a833b2c7375bd472be3260cd017659f61dbc87e2bb9b13ac8465" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:20.017850 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 14 01:12:20.018018 kernel: audit: type=1325 audit(1768353140.011:665): table=filter:126 family=2 entries=46 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:20.011000 audit[4386]: NETFILTER_CFG table=filter:126 family=2 entries=46 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:20.011000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7ffc31759d80 a2=0 a3=7ffc31759d6c items=0 ppid=4064 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.028717 kernel: audit: type=1300 audit(1768353140.011:665): arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7ffc31759d80 a2=0 a3=7ffc31759d6c items=0 ppid=4064 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.011000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:20.037819 kernel: audit: type=1327 audit(1768353140.011:665): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:20.051246 systemd-networkd[1500]: calib4971634fd5: Link UP Jan 14 01:12:20.054339 systemd-networkd[1500]: calib4971634fd5: Gained carrier Jan 14 01:12:20.056069 systemd[1]: Started cri-containerd-ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05.scope - libcontainer container ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05. 
Jan 14 01:12:20.090925 containerd[1591]: 2026-01-14 01:12:19.634 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0 calico-apiserver-5dc87cd5bb- calico-apiserver c98f05e3-1f7b-4c82-ac93-d4261901eb6e 901 0 2026-01-14 01:11:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc87cd5bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b calico-apiserver-5dc87cd5bb-hhclj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4971634fd5 [] [] }} ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-" Jan 14 01:12:20.090925 containerd[1591]: 2026-01-14 01:12:19.635 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.090925 containerd[1591]: 2026-01-14 01:12:19.745 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" HandleID="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.748 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" HandleID="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330f00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"calico-apiserver-5dc87cd5bb-hhclj", "timestamp":"2026-01-14 01:12:19.745737499 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.749 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.814 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.862 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.888 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.944 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.955 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.092639 containerd[1591]: 2026-01-14 01:12:19.962 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:19.964 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:19.969 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:19.996 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:20.019 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.68/26] block=192.168.60.64/26 handle="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:20.019 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.68/26] handle="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:20.019 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:20.095182 containerd[1591]: 2026-01-14 01:12:20.019 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.68/26] IPv6=[] ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" HandleID="k8s-pod-network.1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.095594 containerd[1591]: 2026-01-14 01:12:20.029 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0", GenerateName:"calico-apiserver-5dc87cd5bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c98f05e3-1f7b-4c82-ac93-d4261901eb6e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc87cd5bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"calico-apiserver-5dc87cd5bb-hhclj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4971634fd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:20.099377 containerd[1591]: 2026-01-14 01:12:20.029 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.68/32] ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.099377 containerd[1591]: 2026-01-14 01:12:20.029 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4971634fd5 ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.099377 containerd[1591]: 2026-01-14 01:12:20.055 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.099565 containerd[1591]: 2026-01-14 01:12:20.062 
[INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0", GenerateName:"calico-apiserver-5dc87cd5bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c98f05e3-1f7b-4c82-ac93-d4261901eb6e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc87cd5bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc", Pod:"calico-apiserver-5dc87cd5bb-hhclj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4971634fd5", MAC:"b6:68:ba:3f:77:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:20.099903 containerd[1591]: 2026-01-14 01:12:20.082 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-hhclj" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--hhclj-eth0" Jan 14 01:12:20.174000 audit: BPF prog-id=214 op=LOAD Jan 14 01:12:20.175428 containerd[1591]: time="2026-01-14T01:12:20.174721300Z" level=info msg="connecting to shim 1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc" address="unix:///run/containerd/s/d676097b54d1c2f8379fbe24df460d5655446ddfffe6605578755b36a061e6c3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:20.178090 kernel: audit: type=1334 audit(1768353140.174:666): prog-id=214 op=LOAD Jan 14 01:12:20.179000 audit: BPF prog-id=215 op=LOAD Jan 14 01:12:20.182920 kernel: audit: type=1334 audit(1768353140.179:667): prog-id=215 op=LOAD Jan 14 01:12:20.179000 audit[4380]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.190738 kernel: audit: type=1300 audit(1768353140.179:667): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.199704 kernel: audit: type=1327 audit(1768353140.179:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.199867 kernel: audit: type=1334 audit(1768353140.180:668): prog-id=215 op=UNLOAD Jan 14 01:12:20.201758 kernel: audit: type=1300 audit(1768353140.180:668): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.180000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:12:20.180000 audit[4380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.214839 kernel: audit: type=1327 audit(1768353140.180:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.181000 audit: BPF prog-id=216 op=LOAD Jan 14 01:12:20.181000 audit[4380]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.181000 audit: BPF prog-id=217 op=LOAD Jan 14 01:12:20.181000 audit[4380]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 
01:12:20.181000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:12:20.181000 audit[4380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.181000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:12:20.181000 audit[4380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.182000 audit: BPF prog-id=218 op=LOAD Jan 14 01:12:20.182000 audit[4380]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4369 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564653738623837396530653363326234623930313332323861643535 Jan 14 01:12:20.246117 systemd[1]: Started cri-containerd-1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc.scope - libcontainer container 1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc. 
Jan 14 01:12:20.254000 audit[4443]: NETFILTER_CFG table=filter:127 family=2 entries=54 op=nft_register_chain pid=4443 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:20.254000 audit[4443]: SYSCALL arch=c000003e syscall=46 success=yes exit=29380 a0=3 a1=7ffdeda939f0 a2=0 a3=7ffdeda939dc items=0 ppid=4064 pid=4443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.254000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:20.313000 audit: BPF prog-id=219 op=LOAD Jan 14 01:12:20.314000 audit: BPF prog-id=220 op=LOAD Jan 14 01:12:20.314000 audit[4430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.315000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:12:20.315000 audit[4430]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.315000 audit: BPF prog-id=221 op=LOAD Jan 14 01:12:20.315000 audit[4430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.316000 audit: BPF prog-id=222 op=LOAD Jan 14 01:12:20.316000 audit[4430]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.317000 audit: BPF 
prog-id=222 op=UNLOAD Jan 14 01:12:20.317000 audit[4430]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.317000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:12:20.317000 audit[4430]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.317000 audit: BPF prog-id=223 op=LOAD Jan 14 01:12:20.317000 audit[4430]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4418 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303739323065613234643933356361373934333433633330623463 Jan 14 01:12:20.333890 systemd-networkd[1500]: cali282026fa0cb: Gained IPv6LL Jan 14 01:12:20.410711 containerd[1591]: time="2026-01-14T01:12:20.409558725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-557db9b6c8-cw62p,Uid:7517aecc-466a-4034-a381-03ea5ddb0673,Namespace:calico-system,Attempt:0,} returns sandbox id \"ede78b879e0e3c2b4b9013228ad55d29a12b1433a52a8993dc9372199f306e05\"" Jan 14 01:12:20.415697 containerd[1591]: time="2026-01-14T01:12:20.415290481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:20.460123 containerd[1591]: time="2026-01-14T01:12:20.460038923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-hhclj,Uid:c98f05e3-1f7b-4c82-ac93-d4261901eb6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1207920ea24d935ca794343c30b4c45cca663566734f07058f4833928afde1bc\"" Jan 14 01:12:20.525767 kubelet[2860]: E0114 01:12:20.525627 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:20.528775 containerd[1591]: time="2026-01-14T01:12:20.526959888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ns47v,Uid:6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:20.754395 systemd-networkd[1500]: cali1af4235bdea: Link UP Jan 14 01:12:20.758713 systemd-networkd[1500]: cali1af4235bdea: Gained 
carrier Jan 14 01:12:20.767288 containerd[1591]: time="2026-01-14T01:12:20.766362618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:20.772009 containerd[1591]: time="2026-01-14T01:12:20.770820472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:20.772683 containerd[1591]: time="2026-01-14T01:12:20.770962784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:20.773574 kubelet[2860]: E0114 01:12:20.772848 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:20.773574 kubelet[2860]: E0114 01:12:20.772902 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:20.773574 kubelet[2860]: E0114 01:12:20.773157 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-557db9b6c8-cw62p_calico-system(7517aecc-466a-4034-a381-03ea5ddb0673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:20.773574 kubelet[2860]: E0114 01:12:20.773202 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:20.776224 containerd[1591]: time="2026-01-14T01:12:20.774200432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:20.796050 containerd[1591]: 2026-01-14 01:12:20.603 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0 coredns-66bc5c9577- kube-system 6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c 892 0 2026-01-14 01:11:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b coredns-66bc5c9577-ns47v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1af4235bdea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" 
Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-" Jan 14 01:12:20.796050 containerd[1591]: 2026-01-14 01:12:20.603 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.796050 containerd[1591]: 2026-01-14 01:12:20.668 [INFO][4475] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" HandleID="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.669 [INFO][4475] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" HandleID="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"coredns-66bc5c9577-ns47v", "timestamp":"2026-01-14 01:12:20.668849881 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.669 [INFO][4475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.669 [INFO][4475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.669 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.681 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.689 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.702 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.706 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.796327 containerd[1591]: 2026-01-14 01:12:20.710 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.710 [INFO][4475] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.712 [INFO][4475] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6 Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.721 [INFO][4475] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.735 [INFO][4475] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.69/26] block=192.168.60.64/26 handle="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.735 [INFO][4475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.69/26] handle="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.735 [INFO][4475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:20.797419 containerd[1591]: 2026-01-14 01:12:20.736 [INFO][4475] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.69/26] IPv6=[] ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" HandleID="k8s-pod-network.2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.798519 containerd[1591]: 2026-01-14 01:12:20.740 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"coredns-66bc5c9577-ns47v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1af4235bdea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:20.798519 containerd[1591]: 2026-01-14 01:12:20.741 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.69/32] ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.798519 containerd[1591]: 2026-01-14 01:12:20.741 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af4235bdea ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" 
WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.798519 containerd[1591]: 2026-01-14 01:12:20.755 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.798519 containerd[1591]: 2026-01-14 01:12:20.756 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6", Pod:"coredns-66bc5c9577-ns47v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1af4235bdea", MAC:"16:4e:f9:6f:71:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:20.799009 containerd[1591]: 2026-01-14 01:12:20.786 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" Namespace="kube-system" Pod="coredns-66bc5c9577-ns47v" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--ns47v-eth0" Jan 14 01:12:20.853132 containerd[1591]: time="2026-01-14T01:12:20.853003579Z" level=info msg="connecting to shim 2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6" 
address="unix:///run/containerd/s/25edbcb74bf8d88a96ae187412bc148897ea9299f5a7aeddd842da4110009411" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:20.898000 audit[4518]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=4518 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:20.898000 audit[4518]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7ffffdd94070 a2=0 a3=7ffffdd9405c items=0 ppid=4064 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.898000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:20.917096 systemd[1]: Started cri-containerd-2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6.scope - libcontainer container 2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6. Jan 14 01:12:20.938052 kubelet[2860]: E0114 01:12:20.937715 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:20.940144 kubelet[2860]: E0114 01:12:20.939855 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:20.949000 audit: BPF prog-id=224 op=LOAD Jan 14 01:12:20.952000 audit: BPF prog-id=225 op=LOAD Jan 14 01:12:20.952000 audit[4512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.952000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:12:20.952000 audit[4512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.952000 audit: BPF prog-id=226 op=LOAD Jan 14 01:12:20.952000 audit[4512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.952000 audit: BPF prog-id=227 op=LOAD Jan 14 01:12:20.952000 audit[4512]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.953000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:12:20.953000 audit[4512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.953000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:12:20.953000 audit[4512]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:20.954000 audit: BPF prog-id=228 op=LOAD Jan 14 01:12:20.954000 audit[4512]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4497 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:20.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266363937326464323066636431386334303233626664643537323531 Jan 14 01:12:21.054128 containerd[1591]: time="2026-01-14T01:12:21.052650769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ns47v,Uid:6bdb99b4-4ad9-4667-bc93-4c5cc3fb867c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6\"" Jan 14 01:12:21.057749 kubelet[2860]: E0114 01:12:21.057276 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:21.071621 containerd[1591]: time="2026-01-14T01:12:21.071568015Z" level=info msg="CreateContainer within sandbox \"2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:12:21.097824 containerd[1591]: time="2026-01-14T01:12:21.097651601Z" level=info msg="Container 883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:21.102864 systemd-networkd[1500]: cali4354032ee22: Gained IPv6LL Jan 14 01:12:21.118836 containerd[1591]: time="2026-01-14T01:12:21.118382425Z" level=info msg="CreateContainer within sandbox \"2f6972dd20fcd18c4023bfdd57251558ecd588f8ae973444e84f243e37f6dba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571\"" Jan 14 01:12:21.121740 containerd[1591]: time="2026-01-14T01:12:21.121689795Z" level=info msg="StartContainer for \"883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571\"" Jan 14 01:12:21.126414 containerd[1591]: time="2026-01-14T01:12:21.126358435Z" level=info msg="connecting to shim 883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571" address="unix:///run/containerd/s/25edbcb74bf8d88a96ae187412bc148897ea9299f5a7aeddd842da4110009411" protocol=ttrpc version=3 Jan 14 01:12:21.139540 containerd[1591]: time="2026-01-14T01:12:21.139466406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:21.141811 containerd[1591]: time="2026-01-14T01:12:21.141631611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:21.141811 containerd[1591]: time="2026-01-14T01:12:21.141762247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:21.142908 kubelet[2860]: E0114 01:12:21.141943 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:21.142908 kubelet[2860]: E0114 01:12:21.142008 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:21.142908 kubelet[2860]: E0114 01:12:21.142140 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-hhclj_calico-apiserver(c98f05e3-1f7b-4c82-ac93-d4261901eb6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:21.142908 kubelet[2860]: E0114 01:12:21.142190 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:12:21.170381 systemd[1]: Started cri-containerd-883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571.scope - libcontainer container 883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571. Jan 14 01:12:21.204000 audit: BPF prog-id=229 op=LOAD Jan 14 01:12:21.206000 audit: BPF prog-id=230 op=LOAD Jan 14 01:12:21.206000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.206000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:12:21.206000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.207000 audit: BPF prog-id=231 op=LOAD Jan 14 01:12:21.207000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.208000 audit: BPF prog-id=232 op=LOAD Jan 14 
01:12:21.208000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.209000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:12:21.209000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.209000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:12:21.209000 audit[4537]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.209000 audit: BPF prog-id=233 op=LOAD Jan 14 01:12:21.209000 audit[4537]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4497 pid=4537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:21.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838336138326133376430343662383639633733353664656663656662 Jan 14 01:12:21.230054 systemd-networkd[1500]: calib4971634fd5: Gained IPv6LL Jan 14 01:12:21.250466 containerd[1591]: time="2026-01-14T01:12:21.250304984Z" level=info msg="StartContainer for \"883a82a37d046b869c7356defcefb3608e23f5be7e2da44b78695fcb4f130571\" returns successfully" Jan 14 01:12:21.526480 containerd[1591]: time="2026-01-14T01:12:21.525948736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r79p,Uid:374e8a11-39d9-48c5-bcf7-672988f566dc,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:21.530078 containerd[1591]: time="2026-01-14T01:12:21.530020221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-56bxg,Uid:d9fd657d-8a4b-430b-8046-e1faf50ffec5,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:21.545391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063605089.mount: Deactivated 
successfully. Jan 14 01:12:21.554294 kubelet[2860]: E0114 01:12:21.554227 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:21.557493 containerd[1591]: time="2026-01-14T01:12:21.556058733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p7mjs,Uid:466c1094-6e4a-41dc-8cce-7e44fc6de59f,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:21.878501 systemd-networkd[1500]: califbf4dccde7f: Link UP Jan 14 01:12:21.879820 systemd-networkd[1500]: califbf4dccde7f: Gained carrier Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.708 [INFO][4566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0 goldmane-7c778bb748- calico-system 374e8a11-39d9-48c5-bcf7-672988f566dc 902 0 2026-01-14 01:11:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b goldmane-7c778bb748-8r79p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califbf4dccde7f [] [] }} ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.708 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.777 [INFO][4603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" HandleID="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.778 [INFO][4603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" HandleID="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"goldmane-7c778bb748-8r79p", "timestamp":"2026-01-14 01:12:21.777869593 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.778 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.778 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.778 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.798 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.814 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.827 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.834 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.843 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.843 [INFO][4603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.847 [INFO][4603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51 Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.857 [INFO][4603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.867 [INFO][4603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.70/26] block=192.168.60.64/26 handle="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.868 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.70/26] handle="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.868 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:21.922327 containerd[1591]: 2026-01-14 01:12:21.868 [INFO][4603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.70/26] IPv6=[] ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" HandleID="k8s-pod-network.3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.873 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"374e8a11-39d9-48c5-bcf7-672988f566dc", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"goldmane-7c778bb748-8r79p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf4dccde7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.874 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.70/32] ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.874 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbf4dccde7f ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.889 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.891 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" 
Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"374e8a11-39d9-48c5-bcf7-672988f566dc", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51", Pod:"goldmane-7c778bb748-8r79p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbf4dccde7f", MAC:"32:e4:9f:a0:57:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:21.926061 containerd[1591]: 2026-01-14 01:12:21.917 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" Namespace="calico-system" Pod="goldmane-7c778bb748-8r79p" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-goldmane--7c778bb748--8r79p-eth0" Jan 14 01:12:21.950010 kubelet[2860]: E0114 01:12:21.948463 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:12:21.951893 kubelet[2860]: E0114 01:12:21.950562 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:21.956061 kubelet[2860]: E0114 01:12:21.955845 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:22.008940 containerd[1591]: time="2026-01-14T01:12:22.008864705Z" level=info 
msg="connecting to shim 3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51" address="unix:///run/containerd/s/ab5e2691fa94b3cc183d9d69776fc20d37ce6347701c79478161d957c50c2429" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:22.016434 kubelet[2860]: I0114 01:12:22.016261 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ns47v" podStartSLOduration=45.005913411 podStartE2EDuration="45.005913411s" podCreationTimestamp="2026-01-14 01:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:12:22.005742584 +0000 UTC m=+49.754743733" watchObservedRunningTime="2026-01-14 01:12:22.005913411 +0000 UTC m=+49.754914460" Jan 14 01:12:22.070060 systemd-networkd[1500]: calie753181708d: Link UP Jan 14 01:12:22.071901 systemd-networkd[1500]: calie753181708d: Gained carrier Jan 14 01:12:22.131303 systemd[1]: Started cri-containerd-3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51.scope - libcontainer container 3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51. Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.761 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0 calico-apiserver-5dc87cd5bb- calico-apiserver d9fd657d-8a4b-430b-8046-e1faf50ffec5 903 0 2026-01-14 01:11:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dc87cd5bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b calico-apiserver-5dc87cd5bb-56bxg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie753181708d [] [] }} ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.762 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.843 [INFO][4618] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" HandleID="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.843 [INFO][4618] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" HandleID="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-c80f5dee3b", 
"pod":"calico-apiserver-5dc87cd5bb-56bxg", "timestamp":"2026-01-14 01:12:21.843033387 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.843 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.868 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.868 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.898 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.913 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.932 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.939 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.952 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.953 [INFO][4618] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:21.972 [INFO][4618] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590 Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:22.013 [INFO][4618] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:22.036 [INFO][4618] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.71/26] block=192.168.60.64/26 handle="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:22.038 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.71/26] handle="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:22.039 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:22.146503 containerd[1591]: 2026-01-14 01:12:22.039 [INFO][4618] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.71/26] IPv6=[] ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" HandleID="k8s-pod-network.f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.054 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0", GenerateName:"calico-apiserver-5dc87cd5bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9fd657d-8a4b-430b-8046-e1faf50ffec5", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc87cd5bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"calico-apiserver-5dc87cd5bb-56bxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie753181708d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.054 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.71/32] ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.054 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie753181708d ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.077 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.083 
[INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0", GenerateName:"calico-apiserver-5dc87cd5bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"d9fd657d-8a4b-430b-8046-e1faf50ffec5", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dc87cd5bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590", Pod:"calico-apiserver-5dc87cd5bb-56bxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie753181708d", MAC:"92:ef:6f:45:49:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:22.150427 containerd[1591]: 2026-01-14 01:12:22.140 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" Namespace="calico-apiserver" Pod="calico-apiserver-5dc87cd5bb-56bxg" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-calico--apiserver--5dc87cd5bb--56bxg-eth0" Jan 14 01:12:22.219129 containerd[1591]: time="2026-01-14T01:12:22.218303642Z" level=info msg="connecting to shim f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590" address="unix:///run/containerd/s/d63047b83e55733ff904d0f875fbd9b7010317d2fb5c8738a2f3026d3173d8e4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:22.252000 audit[4690]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:22.252000 audit[4690]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd443a0980 a2=0 a3=7ffd443a096c items=0 ppid=2970 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:22.256530 systemd-networkd[1500]: caliae3b6df5079: Link UP Jan 14 01:12:22.260075 systemd-networkd[1500]: caliae3b6df5079: Gained carrier Jan 14 01:12:22.261000 audit[4690]: NETFILTER_CFG table=nat:130 family=2 entries=14 
op=nft_register_rule pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:22.261000 audit[4690]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd443a0980 a2=0 a3=0 items=0 ppid=2970 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:21.752 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0 coredns-66bc5c9577- kube-system 466c1094-6e4a-41dc-8cce-7e44fc6de59f 904 0 2026-01-14 01:11:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-c80f5dee3b coredns-66bc5c9577-p7mjs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae3b6df5079 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:21.753 [INFO][4589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:21.856 [INFO][4612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" HandleID="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:21.857 [INFO][4612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" HandleID="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-c80f5dee3b", "pod":"coredns-66bc5c9577-p7mjs", "timestamp":"2026-01-14 01:12:21.856155983 +0000 UTC"}, Hostname:"ci-4578.0.0-p-c80f5dee3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:21.859 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.039 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.039 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-c80f5dee3b' Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.063 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.101 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.152 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.157 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.168 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.168 [INFO][4612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.172 [INFO][4612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.183 [INFO][4612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.208 [INFO][4612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.72/26] block=192.168.60.64/26 handle="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.209 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.72/26] handle="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" host="ci-4578.0.0-p-c80f5dee3b" Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.209 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:22.297514 containerd[1591]: 2026-01-14 01:12:22.209 [INFO][4612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.72/26] IPv6=[] ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" HandleID="k8s-pod-network.5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Workload="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.300644 containerd[1591]: 2026-01-14 01:12:22.221 [INFO][4589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"466c1094-6e4a-41dc-8cce-7e44fc6de59f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"", Pod:"coredns-66bc5c9577-p7mjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae3b6df5079", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:22.300644 containerd[1591]: 2026-01-14 01:12:22.224 [INFO][4589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.72/32] ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.300644 containerd[1591]: 2026-01-14 01:12:22.224 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae3b6df5079 ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" 
WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.300644 containerd[1591]: 2026-01-14 01:12:22.261 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.300644 containerd[1591]: 2026-01-14 01:12:22.269 [INFO][4589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"466c1094-6e4a-41dc-8cce-7e44fc6de59f", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-c80f5dee3b", ContainerID:"5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d", Pod:"coredns-66bc5c9577-p7mjs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae3b6df5079", MAC:"6e:bf:20:04:2c:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:22.303925 containerd[1591]: 2026-01-14 01:12:22.285 [INFO][4589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" Namespace="kube-system" Pod="coredns-66bc5c9577-p7mjs" WorkloadEndpoint="ci--4578.0.0--p--c80f5dee3b-k8s-coredns--66bc5c9577--p7mjs-eth0" Jan 14 01:12:22.317909 systemd-networkd[1500]: cali1af4235bdea: Gained IPv6LL Jan 14 01:12:22.326261 systemd[1]: Started 
cri-containerd-f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590.scope - libcontainer container f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590. Jan 14 01:12:22.362000 audit: BPF prog-id=234 op=LOAD Jan 14 01:12:22.363000 audit: BPF prog-id=235 op=LOAD Jan 14 01:12:22.363000 audit[4655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.363000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:12:22.363000 audit[4655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.364000 audit: BPF prog-id=236 op=LOAD Jan 14 01:12:22.364000 audit[4655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.364000 audit: BPF prog-id=237 op=LOAD Jan 14 01:12:22.364000 audit[4655]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.365000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:12:22.365000 audit[4655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.365000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:12:22.365000 audit[4655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.365000 audit: BPF prog-id=238 op=LOAD Jan 14 01:12:22.365000 audit[4655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4643 pid=4655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330303539313062393263666638363033363538303261613232363438 Jan 14 01:12:22.414000 audit: BPF prog-id=239 op=LOAD Jan 14 01:12:22.414000 audit: BPF prog-id=240 op=LOAD Jan 14 01:12:22.414000 audit[4707]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=241 op=LOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=242 op=LOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.415000 audit: BPF prog-id=243 op=LOAD Jan 14 01:12:22.415000 audit[4707]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4687 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633303464656430616432623739666331616362646566316166626464 Jan 14 01:12:22.461967 containerd[1591]: time="2026-01-14T01:12:22.461816038Z" level=info msg="connecting to shim 5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d" address="unix:///run/containerd/s/4cfc0356748c5483414e6f5744c3b75765eef0674e4eda5069e17fb4767e5f49" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:22.473000 audit[4732]: NETFILTER_CFG table=filter:131 family=2 entries=56 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:22.473000 audit[4732]: SYSCALL arch=c000003e syscall=46 
success=yes exit=28728 a0=3 a1=7fffa85bfc00 a2=0 a3=7fffa85bfbec items=0 ppid=4064 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.473000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:22.527439 kubelet[2860]: I0114 01:12:22.527396 2860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:12:22.529918 kubelet[2860]: E0114 01:12:22.529854 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:22.531025 systemd[1]: Started cri-containerd-5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d.scope - libcontainer container 5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d. Jan 14 01:12:22.594000 audit: BPF prog-id=244 op=LOAD Jan 14 01:12:22.600000 audit: BPF prog-id=245 op=LOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=246 op=LOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=247 op=LOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.600000 audit: BPF prog-id=248 op=LOAD Jan 14 01:12:22.600000 audit[4752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4740 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564316265366664656335633230396534386235366463366234626661 Jan 14 01:12:22.674786 containerd[1591]: time="2026-01-14T01:12:22.674491626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dc87cd5bb-56bxg,Uid:d9fd657d-8a4b-430b-8046-e1faf50ffec5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f304ded0ad2b79fc1acbdef1afbdd6cb4d458556c625e988deb9450f7bd77590\"" Jan 14 01:12:22.741942 containerd[1591]: time="2026-01-14T01:12:22.741786486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r79p,Uid:374e8a11-39d9-48c5-bcf7-672988f566dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3005910b92cff860365802aa2264852fe9e5b88a1599c2f7f7edaf682ac61c51\"" Jan 14 01:12:22.745428 containerd[1591]: time="2026-01-14T01:12:22.745118684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:22.773000 audit[4795]: NETFILTER_CFG table=filter:132 family=2 entries=77 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:22.773000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=41692 a0=3 a1=7ffded8a4d00 a2=0 a3=7ffded8a4cec items=0 ppid=4064 pid=4795 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.773000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:22.802187 containerd[1591]: time="2026-01-14T01:12:22.801057058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p7mjs,Uid:466c1094-6e4a-41dc-8cce-7e44fc6de59f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d\"" Jan 14 01:12:22.809199 kubelet[2860]: E0114 01:12:22.807322 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:22.818583 containerd[1591]: time="2026-01-14T01:12:22.818525811Z" level=info msg="CreateContainer within sandbox \"5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:12:22.858442 containerd[1591]: time="2026-01-14T01:12:22.858369945Z" level=info msg="Container 8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:22.897640 containerd[1591]: time="2026-01-14T01:12:22.897588990Z" level=info msg="CreateContainer within sandbox \"5d1be6fdec5c209e48b56dc6b4bfa3a28e63a746c3cd4242bd7272986f81768d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4\"" Jan 14 01:12:22.900690 containerd[1591]: time="2026-01-14T01:12:22.900288351Z" level=info msg="StartContainer for \"8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4\"" Jan 14 01:12:22.902840 containerd[1591]: time="2026-01-14T01:12:22.902796597Z" level=info msg="connecting to shim 8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4" address="unix:///run/containerd/s/4cfc0356748c5483414e6f5744c3b75765eef0674e4eda5069e17fb4767e5f49" protocol=ttrpc version=3 Jan 14 01:12:22.965577 systemd[1]: Started cri-containerd-8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4.scope - libcontainer container 8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4. 
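The kubelet dns.go warnings in this stretch come from the glibc resolver honouring at most three nameserver entries, so kubelet trims the list it writes into a pod's resolv.conf and logs the applied line; the duplicate 67.207.67.3 in that line suggests the node's own resolv.conf repeats a server. A minimal Go sketch of the cap, plus the kind of deduplication an operator could apply on the host (illustrative assumptions, not kubelet's actual code):

```go
package main

import "fmt"

// maxNameservers mirrors the glibc limit of three resolvers that kubelet
// enforces when composing a pod's resolv.conf; extra entries are dropped with
// the "Nameserver limits exceeded" warning seen above.
const maxNameservers = 3

// dedupe keeps the first occurrence of each server. Kubelet itself does not do
// this, which is why the duplicate survives in the applied line.
func dedupe(ns []string) []string {
	seen := map[string]bool{}
	var out []string
	for _, s := range ns {
		if !seen[s] {
			seen[s] = true
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// Fields of the applied nameserver line reported by kubelet above.
	applied := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3"}

	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied:     ", applied)
	fmt.Println("deduplicated:", dedupe(applied))
}
```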
Jan 14 01:12:22.991103 kubelet[2860]: E0114 01:12:22.990961 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:23.013000 audit: BPF prog-id=249 op=LOAD Jan 14 01:12:23.015000 audit: BPF prog-id=250 op=LOAD Jan 14 01:12:23.015000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.015000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:12:23.015000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.016000 audit: BPF prog-id=251 op=LOAD Jan 14 01:12:23.016000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.016000 audit: BPF prog-id=252 op=LOAD Jan 14 01:12:23.016000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.016000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:12:23.016000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.016000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.016000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:12:23.016000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.016000 audit: BPF prog-id=253 op=LOAD Jan 14 01:12:23.016000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4740 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383737373830383331343765636565343737383065653363646434 Jan 14 01:12:23.062591 containerd[1591]: time="2026-01-14T01:12:23.061440494Z" level=info msg="StartContainer for \"8e87778083147ecee47780ee3cdd461e9383f3e36b5198a1a2a19b0a7c6ef3f4\" returns successfully" Jan 14 01:12:23.148842 containerd[1591]: time="2026-01-14T01:12:23.148126780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:23.153924 containerd[1591]: time="2026-01-14T01:12:23.152628515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:23.153924 containerd[1591]: time="2026-01-14T01:12:23.152633490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:23.154882 kubelet[2860]: E0114 01:12:23.154473 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:23.154882 kubelet[2860]: E0114 01:12:23.154530 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:23.155603 kubelet[2860]: E0114 01:12:23.155495 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-56bxg_calico-apiserver(d9fd657d-8a4b-430b-8046-e1faf50ffec5): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:23.156688 kubelet[2860]: E0114 01:12:23.155883 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:12:23.159091 containerd[1591]: time="2026-01-14T01:12:23.158307270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:12:23.282015 kubelet[2860]: E0114 01:12:23.280630 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:23.404000 audit[4890]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:23.404000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcbf87fdf0 a2=0 a3=7ffcbf87fddc items=0 ppid=2970 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:23.408845 systemd-networkd[1500]: califbf4dccde7f: Gained IPv6LL Jan 14 01:12:23.427000 audit[4890]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:23.427000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcbf87fdf0 a2=0 a3=7ffcbf87fddc items=0 ppid=2970 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:23.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:23.517917 containerd[1591]: time="2026-01-14T01:12:23.517833926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:23.520405 containerd[1591]: time="2026-01-14T01:12:23.520246467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:12:23.520405 containerd[1591]: time="2026-01-14T01:12:23.520302186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:23.521344 kubelet[2860]: E0114 01:12:23.520648 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:23.521344 
kubelet[2860]: E0114 01:12:23.520748 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:23.521344 kubelet[2860]: E0114 01:12:23.520875 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8r79p_calico-system(374e8a11-39d9-48c5-bcf7-672988f566dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:23.523821 kubelet[2860]: E0114 01:12:23.521425 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:12:23.662056 systemd-networkd[1500]: calie753181708d: Gained IPv6LL Jan 14 01:12:23.981912 systemd-networkd[1500]: caliae3b6df5079: Gained IPv6LL Jan 14 01:12:23.995512 kubelet[2860]: E0114 01:12:23.995457 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:23.996577 kubelet[2860]: E0114 01:12:23.995948 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:23.998001 kubelet[2860]: E0114 01:12:23.997746 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:12:23.998750 kubelet[2860]: E0114 01:12:23.998262 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:12:24.079214 kubelet[2860]: I0114 01:12:24.079113 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p7mjs" podStartSLOduration=47.079083839 podStartE2EDuration="47.079083839s" podCreationTimestamp="2026-01-14 01:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:12:24.078100365 +0000 UTC m=+51.827101408" 
watchObservedRunningTime="2026-01-14 01:12:24.079083839 +0000 UTC m=+51.828084883" Jan 14 01:12:24.465000 audit[4893]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:24.465000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc833f5010 a2=0 a3=7ffc833f4ffc items=0 ppid=2970 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:24.488000 audit[4893]: NETFILTER_CFG table=nat:136 family=2 entries=56 op=nft_register_chain pid=4893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:24.488000 audit[4893]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc833f5010 a2=0 a3=7ffc833f4ffc items=0 ppid=2970 pid=4893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.488000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:24.998375 kubelet[2860]: E0114 01:12:24.998335 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:26.001453 kubelet[2860]: E0114 01:12:26.001306 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:30.523614 containerd[1591]: time="2026-01-14T01:12:30.523303929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:30.959160 containerd[1591]: time="2026-01-14T01:12:30.959102964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:30.962162 containerd[1591]: time="2026-01-14T01:12:30.962090711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:30.962380 containerd[1591]: time="2026-01-14T01:12:30.962216879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:30.962790 kubelet[2860]: E0114 01:12:30.962623 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:30.964331 kubelet[2860]: E0114 01:12:30.963732 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:30.964331 kubelet[2860]: E0114 01:12:30.963911 2860 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container whisker start failed in pod whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:30.968054 containerd[1591]: time="2026-01-14T01:12:30.967984168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:31.288325 containerd[1591]: time="2026-01-14T01:12:31.288127303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:31.289776 containerd[1591]: time="2026-01-14T01:12:31.289580548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:31.289776 containerd[1591]: time="2026-01-14T01:12:31.289626325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:31.290188 kubelet[2860]: E0114 01:12:31.290103 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:31.290188 kubelet[2860]: E0114 01:12:31.290158 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:31.290789 kubelet[2860]: E0114 01:12:31.290712 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:31.291411 kubelet[2860]: E0114 01:12:31.291248 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:12:31.521462 containerd[1591]: time="2026-01-14T01:12:31.521408422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:12:31.864458 containerd[1591]: time="2026-01-14T01:12:31.864405791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:31.866217 
containerd[1591]: time="2026-01-14T01:12:31.866154789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:12:31.866807 containerd[1591]: time="2026-01-14T01:12:31.866182162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:31.866909 kubelet[2860]: E0114 01:12:31.866453 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:31.867138 kubelet[2860]: E0114 01:12:31.866968 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:31.867775 kubelet[2860]: E0114 01:12:31.867719 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:31.869567 containerd[1591]: time="2026-01-14T01:12:31.869527692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:12:32.211721 containerd[1591]: time="2026-01-14T01:12:32.210993851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:32.215128 containerd[1591]: time="2026-01-14T01:12:32.214974551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:12:32.215128 containerd[1591]: time="2026-01-14T01:12:32.215023784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:32.215977 kubelet[2860]: E0114 01:12:32.215431 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:32.215977 kubelet[2860]: E0114 01:12:32.215483 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:32.215977 kubelet[2860]: E0114 01:12:32.215592 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:32.215977 kubelet[2860]: E0114 01:12:32.215657 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:32.706101 systemd[1]: Started sshd@9-143.198.154.109:22-4.153.228.146:35438.service - OpenSSH per-connection server daemon (4.153.228.146:35438). Jan 14 01:12:32.717231 kernel: kauditd_printk_skb: 199 callbacks suppressed Jan 14 01:12:32.717337 kernel: audit: type=1130 audit(1768353152.705:740): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.154.109:22-4.153.228.146:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:32.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.154.109:22-4.153.228.146:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:33.230000 audit[4912]: USER_ACCT pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.233700 sshd[4912]: Accepted publickey for core from 4.153.228.146 port 35438 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:33.238708 kernel: audit: type=1101 audit(1768353153.230:741): pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.240000 audit[4912]: CRED_ACQ pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.247168 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:33.250688 kernel: audit: type=1103 audit(1768353153.240:742): pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.263856 kernel: audit: type=1006 audit(1768353153.240:743): pid=4912 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:12:33.264051 kernel: audit: type=1300 audit(1768353153.240:743): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc955b15b0 a2=3 a3=0 items=0 ppid=1 pid=4912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:33.240000 audit[4912]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc955b15b0 a2=3 a3=0 items=0 ppid=1 pid=4912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:33.271941 systemd-logind[1569]: New session 11 of user core. Jan 14 01:12:33.240000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:33.281323 kernel: audit: type=1327 audit(1768353153.240:743): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:33.280101 systemd[1]: Started session-11.scope - Session 11 of User core. 
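The PROCTITLE fields in these audit records are process command lines, hex encoded with NUL-separated arguments. A short Go sketch (illustrative only) that decodes the sshd-session title from this login and the leading bytes of one of the runc invocations recorded earlier:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the argv it
// encodes; arguments are separated by NUL bytes in the raw record.
func decodeProctitle(h string) []string {
	raw, err := hex.DecodeString(h)
	if err != nil {
		panic(err)
	}
	return strings.Split(string(raw), "\x00")
}

func main() {
	// Title of the privileged sshd session process from the login above.
	fmt.Println(decodeProctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
	// Leading bytes of the runc PROCTITLE records from the container starts
	// earlier in the log (the full values are much longer).
	fmt.Println(decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"))
}
```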
Jan 14 01:12:33.285000 audit[4912]: USER_START pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.295704 kernel: audit: type=1105 audit(1768353153.285:744): pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.298000 audit[4918]: CRED_ACQ pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:33.312720 kernel: audit: type=1103 audit(1768353153.298:745): pid=4918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:34.218019 sshd[4918]: Connection closed by 4.153.228.146 port 35438 Jan 14 01:12:34.218984 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:34.229000 audit[4912]: USER_END pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:34.239704 kernel: audit: type=1106 audit(1768353154.229:746): pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:34.229000 audit[4912]: CRED_DISP pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:34.243425 systemd[1]: sshd@9-143.198.154.109:22-4.153.228.146:35438.service: Deactivated successfully. Jan 14 01:12:34.251186 kernel: audit: type=1104 audit(1768353154.229:747): pid=4912 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:34.250359 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:12:34.256285 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:12:34.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-143.198.154.109:22-4.153.228.146:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:34.260472 systemd-logind[1569]: Removed session 11. 
Jan 14 01:12:34.527880 containerd[1591]: time="2026-01-14T01:12:34.527601204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:12:34.877154 containerd[1591]: time="2026-01-14T01:12:34.877045248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:34.879210 containerd[1591]: time="2026-01-14T01:12:34.879109865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:12:34.879456 containerd[1591]: time="2026-01-14T01:12:34.879150018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:34.880683 kubelet[2860]: E0114 01:12:34.880211 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:34.880683 kubelet[2860]: E0114 01:12:34.880287 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:34.881317 kubelet[2860]: E0114 01:12:34.880951 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8r79p_calico-system(374e8a11-39d9-48c5-bcf7-672988f566dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:34.883067 containerd[1591]: time="2026-01-14T01:12:34.881865895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:34.915885 kubelet[2860]: E0114 01:12:34.881282 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:12:35.223344 containerd[1591]: time="2026-01-14T01:12:35.222714451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:35.225430 containerd[1591]: time="2026-01-14T01:12:35.225281291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:35.225430 containerd[1591]: time="2026-01-14T01:12:35.225300590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:35.226174 kubelet[2860]: E0114 01:12:35.226076 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:35.226510 kubelet[2860]: E0114 01:12:35.226382 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:35.227078 kubelet[2860]: E0114 01:12:35.227012 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-557db9b6c8-cw62p_calico-system(7517aecc-466a-4034-a381-03ea5ddb0673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:35.227078 kubelet[2860]: E0114 01:12:35.227061 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:35.227698 containerd[1591]: time="2026-01-14T01:12:35.227348786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:35.587064 containerd[1591]: time="2026-01-14T01:12:35.586489354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:35.588774 containerd[1591]: time="2026-01-14T01:12:35.588677199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:35.589089 containerd[1591]: time="2026-01-14T01:12:35.588696430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:35.591894 kubelet[2860]: E0114 01:12:35.589871 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:35.591894 kubelet[2860]: E0114 01:12:35.589924 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:35.591894 kubelet[2860]: E0114 01:12:35.590011 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-56bxg_calico-apiserver(d9fd657d-8a4b-430b-8046-e1faf50ffec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:35.591894 kubelet[2860]: E0114 01:12:35.590051 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:12:37.520199 containerd[1591]: time="2026-01-14T01:12:37.519846642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:37.854695 containerd[1591]: time="2026-01-14T01:12:37.854173221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:37.857416 containerd[1591]: time="2026-01-14T01:12:37.856889987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:37.857416 containerd[1591]: time="2026-01-14T01:12:37.856905688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:37.857653 kubelet[2860]: E0114 01:12:37.857295 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:37.857653 kubelet[2860]: E0114 01:12:37.857368 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:37.857653 kubelet[2860]: E0114 01:12:37.857566 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-hhclj_calico-apiserver(c98f05e3-1f7b-4c82-ac93-d4261901eb6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:37.857653 kubelet[2860]: E0114 01:12:37.857614 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:12:39.299476 systemd[1]: Started sshd@10-143.198.154.109:22-4.153.228.146:59132.service - OpenSSH per-connection server daemon (4.153.228.146:59132). 
Jan 14 01:12:39.306211 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:39.306257 kernel: audit: type=1130 audit(1768353159.300:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.154.109:22-4.153.228.146:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:39.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.154.109:22-4.153.228.146:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:39.722119 sshd[4948]: Accepted publickey for core from 4.153.228.146 port 59132 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:39.721000 audit[4948]: USER_ACCT pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.726167 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:39.723000 audit[4948]: CRED_ACQ pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.731783 kernel: audit: type=1101 audit(1768353159.721:750): pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.731852 kernel: audit: type=1103 audit(1768353159.723:751): pid=4948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.737897 kernel: audit: type=1006 audit(1768353159.723:752): pid=4948 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:12:39.723000 audit[4948]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42bf5550 a2=3 a3=0 items=0 ppid=1 pid=4948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.723000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:39.750382 kernel: audit: type=1300 audit(1768353159.723:752): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42bf5550 a2=3 a3=0 items=0 ppid=1 pid=4948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.752373 kernel: audit: type=1327 audit(1768353159.723:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:39.755457 systemd-logind[1569]: New session 12 of user core. Jan 14 01:12:39.768528 systemd[1]: Started session-12.scope - Session 12 of User core. 
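The SYSCALL/PROCTITLE pair above ends with the process title hex-encoded (proctitle=737368642D...). auditd stores the raw command line that way, with NUL bytes separating argv elements; a one-off decode of the value copied from this record:

    # Hex value copied from the PROCTITLE record above; NULs (if any) separate argv.
    raw = bytes.fromhex('737368642D73657373696F6E3A20636F7265205B707269765D')
    print(raw.replace(b'\x00', b' ').decode())    # -> sshd-session: core [priv]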
Jan 14 01:12:39.782025 kernel: audit: type=1105 audit(1768353159.772:753): pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.772000 audit[4948]: USER_START pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.782000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:39.789721 kernel: audit: type=1103 audit(1768353159.782:754): pid=4952 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:40.123733 sshd[4952]: Connection closed by 4.153.228.146 port 59132 Jan 14 01:12:40.124407 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:40.128000 audit[4948]: USER_END pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:40.135742 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:12:40.136894 systemd[1]: sshd@10-143.198.154.109:22-4.153.228.146:59132.service: Deactivated successfully. Jan 14 01:12:40.138733 kernel: audit: type=1106 audit(1768353160.128:755): pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:40.143060 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:12:40.128000 audit[4948]: CRED_DISP pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:40.150688 kernel: audit: type=1104 audit(1768353160.128:756): pid=4948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:40.151287 systemd-logind[1569]: Removed session 12. Jan 14 01:12:40.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-143.198.154.109:22-4.153.228.146:59132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:45.214082 systemd[1]: Started sshd@11-143.198.154.109:22-4.153.228.146:51806.service - OpenSSH per-connection server daemon (4.153.228.146:51806). Jan 14 01:12:45.223506 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:45.223565 kernel: audit: type=1130 audit(1768353165.212:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.154.109:22-4.153.228.146:51806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:45.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.154.109:22-4.153.228.146:51806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:45.619000 audit[4967]: USER_ACCT pid=4967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.629696 kernel: audit: type=1101 audit(1768353165.619:759): pid=4967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.629829 sshd[4967]: Accepted publickey for core from 4.153.228.146 port 51806 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:45.633425 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:45.631000 audit[4967]: CRED_ACQ pid=4967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.646709 kernel: audit: type=1103 audit(1768353165.631:760): pid=4967 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.657537 kernel: audit: type=1006 audit(1768353165.631:761): pid=4967 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:12:45.657727 kernel: audit: type=1300 audit(1768353165.631:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef58d97f0 a2=3 a3=0 items=0 ppid=1 pid=4967 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:45.631000 audit[4967]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef58d97f0 a2=3 a3=0 items=0 ppid=1 pid=4967 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:45.665970 systemd-logind[1569]: New session 13 of user core. 
Jan 14 01:12:45.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:45.670688 kernel: audit: type=1327 audit(1768353165.631:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:45.672866 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:12:45.679000 audit[4967]: USER_START pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.689718 kernel: audit: type=1105 audit(1768353165.679:762): pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.691000 audit[4971]: CRED_ACQ pid=4971 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.701728 kernel: audit: type=1103 audit(1768353165.691:763): pid=4971 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.946741 sshd[4971]: Connection closed by 4.153.228.146 port 51806 Jan 14 01:12:45.948511 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:45.952000 audit[4967]: USER_END pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.960911 systemd[1]: sshd@11-143.198.154.109:22-4.153.228.146:51806.service: Deactivated successfully. Jan 14 01:12:45.963295 kernel: audit: type=1106 audit(1768353165.952:764): pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.967574 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:12:45.952000 audit[4967]: CRED_DISP pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.972783 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. 
Jan 14 01:12:45.975880 kernel: audit: type=1104 audit(1768353165.952:765): pid=4967 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:45.975175 systemd-logind[1569]: Removed session 13. Jan 14 01:12:45.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-143.198.154.109:22-4.153.228.146:51806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:46.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-143.198.154.109:22-4.153.228.146:51818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:46.026425 systemd[1]: Started sshd@12-143.198.154.109:22-4.153.228.146:51818.service - OpenSSH per-connection server daemon (4.153.228.146:51818). Jan 14 01:12:46.427598 sshd[4984]: Accepted publickey for core from 4.153.228.146 port 51818 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:46.426000 audit[4984]: USER_ACCT pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.429000 audit[4984]: CRED_ACQ pid=4984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.429000 audit[4984]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a2c1b90 a2=3 a3=0 items=0 ppid=1 pid=4984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:46.429000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:46.431815 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:46.441470 systemd-logind[1569]: New session 14 of user core. Jan 14 01:12:46.448956 systemd[1]: Started session-14.scope - Session 14 of User core. 
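From here the failing pulls are retried on a back-off schedule: the ErrImagePull entries above give way to ImagePullBackOff ("Back-off pulling image ...") entries in the records that follow, meaning the kubelet waits an increasing interval before the next attempt. The exact schedule is internal to the kubelet and not derivable from this log; a generic capped exponential backoff, for illustration only, looks like:

    import time

    def pull_with_backoff(pull, initial=10.0, factor=2.0, cap=300.0):
        """Retry a failing pull() with a capped exponential delay.

        'pull' is any callable that raises on failure; the parameters here are
        illustrative, not the kubelet's actual timings.
        """
        delay = initial
        while True:
            try:
                return pull()
            except RuntimeError as err:           # stand-in for ErrImagePull
                print(f'pull failed ({err}); next attempt in {delay:.0f}s')
                time.sleep(delay)
                delay = min(delay * factor, cap)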
Jan 14 01:12:46.454000 audit[4984]: USER_START pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.458000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.525835 kubelet[2860]: E0114 01:12:46.525624 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:12:46.527695 kubelet[2860]: E0114 01:12:46.527533 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:12:46.845798 sshd[4988]: Connection closed by 4.153.228.146 port 51818 Jan 14 01:12:46.845643 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:46.848000 audit[4984]: USER_END pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.848000 audit[4984]: CRED_DISP pid=4984 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:46.857459 systemd[1]: sshd@12-143.198.154.109:22-4.153.228.146:51818.service: Deactivated successfully. 
Jan 14 01:12:46.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-143.198.154.109:22-4.153.228.146:51818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:46.861949 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:12:46.864000 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:12:46.867568 systemd-logind[1569]: Removed session 14. Jan 14 01:12:46.934111 systemd[1]: Started sshd@13-143.198.154.109:22-4.153.228.146:51830.service - OpenSSH per-connection server daemon (4.153.228.146:51830). Jan 14 01:12:46.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-143.198.154.109:22-4.153.228.146:51830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:47.367000 audit[4998]: USER_ACCT pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.369819 sshd[4998]: Accepted publickey for core from 4.153.228.146 port 51830 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:47.370000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.370000 audit[4998]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2ddf1120 a2=3 a3=0 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:47.370000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:47.373225 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:47.390774 systemd-logind[1569]: New session 15 of user core. Jan 14 01:12:47.398073 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:12:47.403000 audit[4998]: USER_START pid=4998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.411000 audit[5002]: CRED_ACQ pid=5002 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.522694 kubelet[2860]: E0114 01:12:47.521955 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:12:47.522694 kubelet[2860]: E0114 01:12:47.522488 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:47.769545 sshd[5002]: Connection closed by 4.153.228.146 port 51830 Jan 14 01:12:47.771166 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:47.773000 audit[4998]: USER_END pid=4998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.775000 audit[4998]: CRED_DISP pid=4998 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:47.779790 systemd[1]: sshd@13-143.198.154.109:22-4.153.228.146:51830.service: Deactivated successfully. Jan 14 01:12:47.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-143.198.154.109:22-4.153.228.146:51830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:47.785781 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:12:47.787842 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:12:47.791860 systemd-logind[1569]: Removed session 15. 
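Every pull in this stretch fails the same way, only the image reference changes (goldmane, kube-controllers, apiserver, csi, node-driver-registrar, whisker, whisker-backend, all ghcr.io/flatcar/calico/*:v3.30.4). A quick tally per reference can be pulled out of the journal text; the sketch below assumes a hypothetical journal.txt dump of this log and counts occurrences of the resolve failure, not distinct pull attempts:

    import re
    from collections import Counter

    # Count 'failed to resolve image: <ref>: not found' occurrences per reference.
    RESOLVE_RE = re.compile(r'failed to resolve image: (\S+?): not found')

    failures = Counter()
    with open('journal.txt') as fh:               # hypothetical dump of this log
        for line in fh:
            failures.update(RESOLVE_RE.findall(line))

    for ref, count in failures.most_common():
        print(f'{count:4d}  {ref}')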
Jan 14 01:12:48.524716 kubelet[2860]: E0114 01:12:48.524264 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:12:48.524716 kubelet[2860]: E0114 01:12:48.524410 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:12:49.520168 kubelet[2860]: E0114 01:12:49.520107 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:52.521703 kubelet[2860]: E0114 01:12:52.520277 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:52.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.154.109:22-4.153.228.146:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:52.845732 systemd[1]: Started sshd@14-143.198.154.109:22-4.153.228.146:51836.service - OpenSSH per-connection server daemon (4.153.228.146:51836). Jan 14 01:12:52.847977 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:12:52.848033 kernel: audit: type=1130 audit(1768353172.845:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.154.109:22-4.153.228.146:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:53.232000 audit[5021]: USER_ACCT pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.235396 sshd[5021]: Accepted publickey for core from 4.153.228.146 port 51836 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:53.239657 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:53.237000 audit[5021]: CRED_ACQ pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.241851 kernel: audit: type=1101 audit(1768353173.232:786): pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.241946 kernel: audit: type=1103 audit(1768353173.237:787): pid=5021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.251745 kernel: audit: type=1006 audit(1768353173.237:788): pid=5021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:12:53.252214 kernel: audit: type=1300 audit(1768353173.237:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb4a2fe80 a2=3 a3=0 items=0 ppid=1 pid=5021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:53.237000 audit[5021]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb4a2fe80 a2=3 a3=0 items=0 ppid=1 pid=5021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:53.237000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:53.264792 kernel: audit: type=1327 audit(1768353173.237:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:53.266013 systemd-logind[1569]: New session 16 of user core. Jan 14 01:12:53.272292 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 01:12:53.292586 kernel: audit: type=1105 audit(1768353173.279:789): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.279000 audit[5021]: USER_START pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.294000 audit[5025]: CRED_ACQ pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.303147 kernel: audit: type=1103 audit(1768353173.294:790): pid=5025 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.521881 kubelet[2860]: E0114 01:12:53.521075 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:53.615630 sshd[5025]: Connection closed by 4.153.228.146 port 51836 Jan 14 01:12:53.618133 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:53.625000 audit[5021]: USER_END pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.631977 systemd[1]: sshd@14-143.198.154.109:22-4.153.228.146:51836.service: Deactivated successfully. Jan 14 01:12:53.635725 kernel: audit: type=1106 audit(1768353173.625:791): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.637362 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:12:53.626000 audit[5021]: CRED_DISP pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.645830 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:12:53.646726 kernel: audit: type=1104 audit(1768353173.626:792): pid=5021 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:53.648777 systemd-logind[1569]: Removed session 16. 
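The recurring kubelet warning "Nameserver limits were exceeded" reflects the resolver limit of three nameservers (glibc MAXNS); the applied line it reports (67.207.67.3 67.207.67.2 67.207.67.3) repeats an address, which suggests the source resolv.conf lists 67.207.67.3 twice plus at least one entry that was dropped. A quick node-side check, assuming /etc/resolv.conf is the file the kubelet reads (it may instead be pointed at systemd-resolved's copy under /run/systemd/resolve/):

    # Report nameserver entries beyond the three-server resolver limit, plus
    # duplicates. The path is an assumption; adjust to the kubelet's --resolv-conf.
    MAXNS = 3

    with open('/etc/resolv.conf') as fh:
        servers = [parts[1] for parts in (line.split() for line in fh)
                   if parts and parts[0] == 'nameserver' and len(parts) > 1]

    unique = list(dict.fromkeys(servers))         # dedupe, keep order
    if len(servers) > MAXNS or len(unique) != len(servers):
        print('declared :', servers)
        print('effective:', unique[:MAXNS])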
Jan 14 01:12:53.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-143.198.154.109:22-4.153.228.146:51836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:54.519565 kubelet[2860]: E0114 01:12:54.519508 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:12:57.523206 containerd[1591]: time="2026-01-14T01:12:57.523144448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:57.876909 containerd[1591]: time="2026-01-14T01:12:57.876838255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:57.879178 containerd[1591]: time="2026-01-14T01:12:57.879117172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:57.879350 containerd[1591]: time="2026-01-14T01:12:57.879186850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:57.879608 kubelet[2860]: E0114 01:12:57.879569 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:57.880610 kubelet[2860]: E0114 01:12:57.880154 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:57.880610 kubelet[2860]: E0114 01:12:57.880338 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:57.881851 containerd[1591]: time="2026-01-14T01:12:57.881812642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:58.252346 containerd[1591]: time="2026-01-14T01:12:58.251747214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:58.254188 containerd[1591]: time="2026-01-14T01:12:58.254112679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:58.254351 containerd[1591]: time="2026-01-14T01:12:58.254260163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:58.255126 kubelet[2860]: E0114 01:12:58.255056 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:58.255411 kubelet[2860]: E0114 01:12:58.255112 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:58.255411 kubelet[2860]: E0114 01:12:58.255365 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69bb8b8b79-mhzjk_calico-system(a73596b3-7664-4579-a0ee-92d4f6e8b5c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:58.255745 kubelet[2860]: E0114 01:12:58.255707 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:12:58.524106 containerd[1591]: time="2026-01-14T01:12:58.523962510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:58.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.154.109:22-4.153.228.146:49896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:58.694139 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:58.694231 kernel: audit: type=1130 audit(1768353178.691:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.154.109:22-4.153.228.146:49896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:58.692509 systemd[1]: Started sshd@15-143.198.154.109:22-4.153.228.146:49896.service - OpenSSH per-connection server daemon (4.153.228.146:49896). 
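containerd reports each failure as a plain 404 Not Found from ghcr.io, so the missing manifest can be confirmed independently of the kubelet's retry loop. A rough sketch of the registry v2 manifest lookup using one reference from the log; it assumes the registry advertises anonymous token auth via WWW-Authenticate as in the OCI distribution spec, and it does no real error handling:

    import json
    import re
    import urllib.error
    import urllib.request

    IMAGE = 'ghcr.io/flatcar/calico/whisker:v3.30.4'   # reference taken from the log

    def manifest_status(image):
        """Return the HTTP status of the /v2/<name>/manifests/<tag> request."""
        registry, rest = image.split('/', 1)
        name, tag = rest.rsplit(':', 1)
        token = None
        try:
            urllib.request.urlopen(f'https://{registry}/v2/')
        except urllib.error.HTTPError as err:          # expect 401 with a token challenge
            fields = dict(re.findall(r'(\w+)="([^"]*)"',
                                     err.headers.get('WWW-Authenticate', '')))
            url = (f'{fields["realm"]}?service={fields.get("service", registry)}'
                   f'&scope=repository:{name}:pull')
            with urllib.request.urlopen(url) as resp:
                token = json.load(resp).get('token')
        req = urllib.request.Request(
            f'https://{registry}/v2/{name}/manifests/{tag}',
            headers={'Accept': 'application/vnd.oci.image.index.v1+json, '
                               'application/vnd.docker.distribution.manifest.list.v2+json'})
        if token:
            req.add_header('Authorization', f'Bearer {token}')
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            return err.code                             # 404 expected for these images

    print(IMAGE, '->', manifest_status(IMAGE))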
Jan 14 01:12:58.871242 containerd[1591]: time="2026-01-14T01:12:58.871169154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:58.873569 containerd[1591]: time="2026-01-14T01:12:58.873481761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:58.873792 containerd[1591]: time="2026-01-14T01:12:58.873632059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:58.875053 kubelet[2860]: E0114 01:12:58.874990 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:58.875258 kubelet[2860]: E0114 01:12:58.875065 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:58.875258 kubelet[2860]: E0114 01:12:58.875170 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-557db9b6c8-cw62p_calico-system(7517aecc-466a-4034-a381-03ea5ddb0673): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:58.875258 kubelet[2860]: E0114 01:12:58.875219 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:12:59.167000 audit[5066]: USER_ACCT pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.171918 sshd[5066]: Accepted publickey for core from 4.153.228.146 port 49896 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:12:59.177712 kernel: audit: type=1101 audit(1768353179.167:795): pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.179784 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:59.176000 audit[5066]: CRED_ACQ pid=5066 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.188723 kernel: audit: type=1103 audit(1768353179.176:796): pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.176000 audit[5066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd226860 a2=3 a3=0 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:59.198711 kernel: audit: type=1006 audit(1768353179.176:797): pid=5066 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:12:59.199066 kernel: audit: type=1300 audit(1768353179.176:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefd226860 a2=3 a3=0 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:59.176000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:59.209598 kernel: audit: type=1327 audit(1768353179.176:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:59.217088 systemd-logind[1569]: New session 17 of user core. Jan 14 01:12:59.225102 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:12:59.242996 kernel: audit: type=1105 audit(1768353179.230:798): pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.230000 audit[5066]: USER_START pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.243000 audit[5070]: CRED_ACQ pid=5070 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.253733 kernel: audit: type=1103 audit(1768353179.243:799): pid=5070 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.547619 sshd[5070]: Connection closed by 4.153.228.146 port 49896 Jan 14 01:12:59.547110 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:59.552000 audit[5066]: USER_END pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.565228 kernel: audit: type=1106 audit(1768353179.552:800): pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.561226 systemd[1]: sshd@15-143.198.154.109:22-4.153.228.146:49896.service: Deactivated successfully. Jan 14 01:12:59.567770 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:12:59.571793 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:12:59.552000 audit[5066]: CRED_DISP pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.576906 systemd-logind[1569]: Removed session 17. Jan 14 01:12:59.580788 kernel: audit: type=1104 audit(1768353179.552:801): pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:12:59.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-143.198.154.109:22-4.153.228.146:49896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:00.525435 containerd[1591]: time="2026-01-14T01:13:00.525024229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:13:00.890394 containerd[1591]: time="2026-01-14T01:13:00.890318411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:00.892891 containerd[1591]: time="2026-01-14T01:13:00.892811877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:13:00.893116 containerd[1591]: time="2026-01-14T01:13:00.893009812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:00.894104 kubelet[2860]: E0114 01:13:00.893979 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:13:00.896693 kubelet[2860]: E0114 01:13:00.896592 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:13:00.897429 kubelet[2860]: E0114 01:13:00.897385 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8r79p_calico-system(374e8a11-39d9-48c5-bcf7-672988f566dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:00.897770 kubelet[2860]: E0114 01:13:00.897442 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:13:00.897863 containerd[1591]: time="2026-01-14T01:13:00.897629436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:13:01.208425 containerd[1591]: time="2026-01-14T01:13:01.208115884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:01.211652 containerd[1591]: time="2026-01-14T01:13:01.211484853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:13:01.211652 containerd[1591]: time="2026-01-14T01:13:01.211616555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:01.212376 kubelet[2860]: E0114 01:13:01.212214 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:01.212376 kubelet[2860]: E0114 01:13:01.212293 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:01.212827 kubelet[2860]: E0114 01:13:01.212792 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-56bxg_calico-apiserver(d9fd657d-8a4b-430b-8046-e1faf50ffec5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:01.212931 kubelet[2860]: E0114 01:13:01.212846 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:13:01.213591 containerd[1591]: time="2026-01-14T01:13:01.213544907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:13:01.572236 containerd[1591]: time="2026-01-14T01:13:01.572047128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:01.574807 containerd[1591]: time="2026-01-14T01:13:01.574644849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:13:01.574962 containerd[1591]: time="2026-01-14T01:13:01.574818725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:01.575902 kubelet[2860]: E0114 01:13:01.575774 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:13:01.575902 kubelet[2860]: E0114 01:13:01.575861 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:13:01.578704 kubelet[2860]: E0114 01:13:01.576310 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:01.581382 containerd[1591]: time="2026-01-14T01:13:01.581327484Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:13:01.891889 containerd[1591]: time="2026-01-14T01:13:01.891643080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:01.894984 containerd[1591]: time="2026-01-14T01:13:01.894796674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:13:01.894984 containerd[1591]: time="2026-01-14T01:13:01.894927610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:01.896744 kubelet[2860]: E0114 01:13:01.895820 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:13:01.896744 kubelet[2860]: E0114 01:13:01.895880 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:13:01.896744 kubelet[2860]: E0114 01:13:01.896014 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-598dd_calico-system(9c51b2de-d8d9-4e42-af77-3bf2696395e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:01.896744 kubelet[2860]: E0114 01:13:01.896083 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:13:02.523136 containerd[1591]: time="2026-01-14T01:13:02.523082764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:13:02.847949 containerd[1591]: time="2026-01-14T01:13:02.847636128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:02.850907 containerd[1591]: time="2026-01-14T01:13:02.850721658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:13:02.850907 
containerd[1591]: time="2026-01-14T01:13:02.850857870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:02.851150 kubelet[2860]: E0114 01:13:02.851083 2860 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:02.851225 kubelet[2860]: E0114 01:13:02.851151 2860 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:02.851967 kubelet[2860]: E0114 01:13:02.851828 2860 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dc87cd5bb-hhclj_calico-apiserver(c98f05e3-1f7b-4c82-ac93-d4261901eb6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:02.852113 kubelet[2860]: E0114 01:13:02.851993 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:13:04.662568 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:13:04.662868 kernel: audit: type=1130 audit(1768353184.644:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.154.109:22-4.153.228.146:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:04.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.154.109:22-4.153.228.146:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:04.645250 systemd[1]: Started sshd@16-143.198.154.109:22-4.153.228.146:56238.service - OpenSSH per-connection server daemon (4.153.228.146:56238). 
Jan 14 01:13:05.140000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.143357 sshd[5084]: Accepted publickey for core from 4.153.228.146 port 56238 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:05.150845 kernel: audit: type=1101 audit(1768353185.140:804): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.152000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.157106 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:05.162923 kernel: audit: type=1103 audit(1768353185.152:805): pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.171708 kernel: audit: type=1006 audit(1768353185.152:806): pid=5084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:13:05.152000 audit[5084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc132770c0 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:05.181762 systemd-logind[1569]: New session 18 of user core. Jan 14 01:13:05.187037 kernel: audit: type=1300 audit(1768353185.152:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc132770c0 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:05.190037 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:13:05.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:05.200752 kernel: audit: type=1327 audit(1768353185.152:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:05.203000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.213709 kernel: audit: type=1105 audit(1768353185.203:807): pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.206000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.222737 kernel: audit: type=1103 audit(1768353185.206:808): pid=5088 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.614722 sshd[5088]: Connection closed by 4.153.228.146 port 56238 Jan 14 01:13:05.642000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.616171 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:05.652702 kernel: audit: type=1106 audit(1768353185.642:809): pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.660494 kernel: audit: type=1104 audit(1768353185.642:810): pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.642000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:05.655319 systemd[1]: sshd@16-143.198.154.109:22-4.153.228.146:56238.service: Deactivated successfully. Jan 14 01:13:05.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-143.198.154.109:22-4.153.228.146:56238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:05.663169 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:13:05.667872 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:13:05.672306 systemd-logind[1569]: Removed session 18. Jan 14 01:13:10.523396 kubelet[2860]: E0114 01:13:10.523054 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2" Jan 14 01:13:10.700054 systemd[1]: Started sshd@17-143.198.154.109:22-4.153.228.146:56244.service - OpenSSH per-connection server daemon (4.153.228.146:56244). Jan 14 01:13:10.713179 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:13:10.713243 kernel: audit: type=1130 audit(1768353190.699:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.154.109:22-4.153.228.146:56244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:10.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.154.109:22-4.153.228.146:56244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:11.185834 sshd[5102]: Accepted publickey for core from 4.153.228.146 port 56244 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:11.184000 audit[5102]: USER_ACCT pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.196111 kernel: audit: type=1101 audit(1768353191.184:813): pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.204301 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:11.200000 audit[5102]: CRED_ACQ pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.212714 kernel: audit: type=1103 audit(1768353191.200:814): pid=5102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.219899 kernel: audit: type=1006 audit(1768353191.200:815): pid=5102 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:13:11.200000 audit[5102]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd79e4c60 a2=3 a3=0 items=0 ppid=1 pid=5102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:11.236886 kernel: audit: type=1300 audit(1768353191.200:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd79e4c60 a2=3 a3=0 items=0 ppid=1 pid=5102 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:11.228772 systemd-logind[1569]: New session 19 of user core. Jan 14 01:13:11.200000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:11.243295 kernel: audit: type=1327 audit(1768353191.200:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:11.243012 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:13:11.248000 audit[5102]: USER_START pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.259894 kernel: audit: type=1105 audit(1768353191.248:816): pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.271807 kernel: audit: type=1103 audit(1768353191.260:817): pid=5106 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.260000 audit[5106]: CRED_ACQ pid=5106 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.544837 sshd[5106]: Connection closed by 4.153.228.146 port 56244 Jan 14 01:13:11.548256 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:11.563454 kernel: audit: type=1106 audit(1768353191.552:818): pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.552000 audit[5102]: USER_END pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.567541 systemd[1]: sshd@17-143.198.154.109:22-4.153.228.146:56244.service: Deactivated successfully. Jan 14 01:13:11.569326 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:13:11.575393 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:13:11.552000 audit[5102]: CRED_DISP pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.583837 kernel: audit: type=1104 audit(1768353191.552:819): pid=5102 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:11.584924 systemd-logind[1569]: Removed session 19. Jan 14 01:13:11.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-143.198.154.109:22-4.153.228.146:56244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:11.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-143.198.154.109:22-4.153.228.146:56258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:11.623147 systemd[1]: Started sshd@18-143.198.154.109:22-4.153.228.146:56258.service - OpenSSH per-connection server daemon (4.153.228.146:56258). Jan 14 01:13:12.011000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.014236 sshd[5118]: Accepted publickey for core from 4.153.228.146 port 56258 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:12.014000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.014000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff95e1c080 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:12.014000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:12.017920 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:12.033788 systemd-logind[1569]: New session 20 of user core. Jan 14 01:13:12.039082 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:13:12.048000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.052000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.529649 kubelet[2860]: E0114 01:13:12.529392 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673" Jan 14 01:13:12.533513 kubelet[2860]: E0114 01:13:12.530560 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5" Jan 14 01:13:12.660226 sshd[5122]: Connection closed by 4.153.228.146 port 56258 Jan 14 01:13:12.660711 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:12.668000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.671000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:12.676445 systemd[1]: sshd@18-143.198.154.109:22-4.153.228.146:56258.service: Deactivated successfully. Jan 14 01:13:12.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-143.198.154.109:22-4.153.228.146:56258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:12.680755 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:13:12.683152 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:13:12.687028 systemd-logind[1569]: Removed session 20. Jan 14 01:13:12.741587 systemd[1]: Started sshd@19-143.198.154.109:22-4.153.228.146:56272.service - OpenSSH per-connection server daemon (4.153.228.146:56272). 
Jan 14 01:13:12.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-143.198.154.109:22-4.153.228.146:56272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:13.190000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:13.191229 sshd[5132]: Accepted publickey for core from 4.153.228.146 port 56272 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:13.192000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:13.192000 audit[5132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0b171210 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:13.192000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:13.194392 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:13.207178 systemd-logind[1569]: New session 21 of user core. Jan 14 01:13:13.218271 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:13:13.225000 audit[5132]: USER_START pid=5132 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:13.228000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:13.521794 kubelet[2860]: E0114 01:13:13.521088 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2" Jan 14 01:13:14.482719 sshd[5136]: Connection closed by 4.153.228.146 port 56272 Jan 14 01:13:14.481521 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:14.487000 
audit[5146]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:14.487000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc796a8630 a2=0 a3=7ffc796a861c items=0 ppid=2970 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:14.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:14.490000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:14.490000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:14.504068 systemd[1]: sshd@19-143.198.154.109:22-4.153.228.146:56272.service: Deactivated successfully. Jan 14 01:13:14.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-143.198.154.109:22-4.153.228.146:56272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:14.507177 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:13:14.507701 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:13:14.517140 systemd-logind[1569]: Removed session 21. Jan 14 01:13:14.517000 audit[5146]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:14.522086 kubelet[2860]: E0114 01:13:14.522028 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 14 01:13:14.517000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc796a8630 a2=0 a3=0 items=0 ppid=2970 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:14.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:14.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-143.198.154.109:22-4.153.228.146:40342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:14.569382 systemd[1]: Started sshd@20-143.198.154.109:22-4.153.228.146:40342.service - OpenSSH per-connection server daemon (4.153.228.146:40342). 
Jan 14 01:13:15.021000 audit[5151]: USER_ACCT pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.025951 sshd[5151]: Accepted publickey for core from 4.153.228.146 port 40342 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:15.024000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.024000 audit[5151]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc718c6260 a2=3 a3=0 items=0 ppid=1 pid=5151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:15.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:15.027644 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:15.039143 systemd-logind[1569]: New session 22 of user core. Jan 14 01:13:15.046327 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 01:13:15.052000 audit[5151]: USER_START pid=5151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.058000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.570000 audit[5162]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:15.570000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd925c1410 a2=0 a3=7ffd925c13fc items=0 ppid=2970 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:15.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:15.576000 audit[5162]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:15.576000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd925c1410 a2=0 a3=0 items=0 ppid=2970 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:15.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:15.729711 sshd[5155]: Connection closed by 4.153.228.146 port 40342 Jan 14 
01:13:15.732002 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:15.746870 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 14 01:13:15.747044 kernel: audit: type=1106 audit(1768353195.732:849): pid=5151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.732000 audit[5151]: USER_END pid=5151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.739696 systemd[1]: sshd@20-143.198.154.109:22-4.153.228.146:40342.service: Deactivated successfully. Jan 14 01:13:15.745191 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:13:15.754043 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:13:15.758611 systemd-logind[1569]: Removed session 22. Jan 14 01:13:15.733000 audit[5151]: CRED_DISP pid=5151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.768762 kernel: audit: type=1104 audit(1768353195.733:850): pid=5151 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:15.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-143.198.154.109:22-4.153.228.146:40342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:15.778074 kernel: audit: type=1131 audit(1768353195.738:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-143.198.154.109:22-4.153.228.146:40342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:15.800247 systemd[1]: Started sshd@21-143.198.154.109:22-4.153.228.146:40346.service - OpenSSH per-connection server daemon (4.153.228.146:40346). Jan 14 01:13:15.810368 kernel: audit: type=1130 audit(1768353195.798:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-143.198.154.109:22-4.153.228.146:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:15.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-143.198.154.109:22-4.153.228.146:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:16.262708 kernel: audit: type=1101 audit(1768353196.254:853): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.254000 audit[5167]: USER_ACCT pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.263017 sshd[5167]: Accepted publickey for core from 4.153.228.146 port 40346 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:16.263000 audit[5167]: CRED_ACQ pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.268219 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:16.272632 kernel: audit: type=1103 audit(1768353196.263:854): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.284554 kernel: audit: type=1006 audit(1768353196.263:855): pid=5167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:13:16.284752 kernel: audit: type=1300 audit(1768353196.263:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4083b180 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:16.263000 audit[5167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4083b180 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:16.263000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:16.288777 kernel: audit: type=1327 audit(1768353196.263:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:16.293991 systemd-logind[1569]: New session 23 of user core. Jan 14 01:13:16.304266 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 01:13:16.325846 kernel: audit: type=1105 audit(1768353196.313:856): pid=5167 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.313000 audit[5167]: USER_START pid=5167 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.328000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.529850 kubelet[2860]: E0114 01:13:16.526403 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc" Jan 14 01:13:16.644733 sshd[5171]: Connection closed by 4.153.228.146 port 40346 Jan 14 01:13:16.647917 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:16.650000 audit[5167]: USER_END pid=5167 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.651000 audit[5167]: CRED_DISP pid=5167 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:16.655822 systemd[1]: sshd@21-143.198.154.109:22-4.153.228.146:40346.service: Deactivated successfully. Jan 14 01:13:16.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-143.198.154.109:22-4.153.228.146:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:16.660543 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:13:16.666562 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:13:16.670764 systemd-logind[1569]: Removed session 23. 
Jan 14 01:13:17.521036 kubelet[2860]: E0114 01:13:17.520913 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e" Jan 14 01:13:20.924335 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 01:13:20.924480 kernel: audit: type=1325 audit(1768353200.917:861): table=filter:141 family=2 entries=26 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:20.917000 audit[5184]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:20.917000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc28d45ec0 a2=0 a3=7ffc28d45eac items=0 ppid=2970 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:20.934721 kernel: audit: type=1300 audit(1768353200.917:861): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc28d45ec0 a2=0 a3=7ffc28d45eac items=0 ppid=2970 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:20.917000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:20.939708 kernel: audit: type=1327 audit(1768353200.917:861): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:20.935000 audit[5184]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:20.944704 kernel: audit: type=1325 audit(1768353200.935:862): table=nat:142 family=2 entries=104 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:13:20.935000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc28d45ec0 a2=0 a3=7ffc28d45eac items=0 ppid=2970 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:20.952690 kernel: audit: type=1300 audit(1768353200.935:862): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc28d45ec0 a2=0 a3=7ffc28d45eac items=0 ppid=2970 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:20.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:20.960707 kernel: audit: type=1327 audit(1768353200.935:862): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:13:21.725396 
systemd[1]: Started sshd@22-143.198.154.109:22-4.153.228.146:40352.service - OpenSSH per-connection server daemon (4.153.228.146:40352). Jan 14 01:13:21.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.154.109:22-4.153.228.146:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:21.734884 kernel: audit: type=1130 audit(1768353201.723:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.154.109:22-4.153.228.146:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:22.119000 audit[5186]: USER_ACCT pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.121328 sshd[5186]: Accepted publickey for core from 4.153.228.146 port 40352 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI Jan 14 01:13:22.128713 kernel: audit: type=1101 audit(1768353202.119:864): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.131733 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:22.128000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.142699 kernel: audit: type=1103 audit(1768353202.128:865): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.148474 systemd-logind[1569]: New session 24 of user core. Jan 14 01:13:22.156699 kernel: audit: type=1006 audit(1768353202.128:866): pid=5186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:13:22.158838 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:13:22.128000 audit[5186]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd73994a10 a2=3 a3=0 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:22.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:22.162000 audit[5186]: USER_START pid=5186 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.166000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.432008 sshd[5190]: Connection closed by 4.153.228.146 port 40352 Jan 14 01:13:22.433553 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:22.434000 audit[5186]: USER_END pid=5186 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.434000 audit[5186]: CRED_DISP pid=5186 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 14 01:13:22.440821 systemd[1]: sshd@22-143.198.154.109:22-4.153.228.146:40352.service: Deactivated successfully. Jan 14 01:13:22.441544 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:13:22.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-143.198.154.109:22-4.153.228.146:40352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:22.446113 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:13:22.451799 systemd-logind[1569]: Removed session 24. 
Jan 14 01:13:22.524560 kubelet[2860]: E0114 01:13:22.524506 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69bb8b8b79-mhzjk" podUID="a73596b3-7664-4579-a0ee-92d4f6e8b5c2"
Jan 14 01:13:24.524017 kubelet[2860]: E0114 01:13:24.523952 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-56bxg" podUID="d9fd657d-8a4b-430b-8046-e1faf50ffec5"
Jan 14 01:13:25.521422 kubelet[2860]: E0114 01:13:25.521311 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-557db9b6c8-cw62p" podUID="7517aecc-466a-4034-a381-03ea5ddb0673"
Jan 14 01:13:25.522192 kubelet[2860]: E0114 01:13:25.521757 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-598dd" podUID="9c51b2de-d8d9-4e42-af77-3bf2696395e2"
Jan 14 01:13:27.533475 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 14 01:13:27.533710 kernel: audit: type=1130 audit(1768353207.524:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.154.109:22-4.153.228.146:44346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:27.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.154.109:22-4.153.228.146:44346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:27.524827 systemd[1]: Started sshd@23-143.198.154.109:22-4.153.228.146:44346.service - OpenSSH per-connection server daemon (4.153.228.146:44346).
Jan 14 01:13:28.040355 sshd[5227]: Accepted publickey for core from 4.153.228.146 port 44346 ssh2: RSA SHA256:XqzSQld4V827gjdloadE8nSbb31nu4wq54pxdhl7WcI
Jan 14 01:13:28.039000 audit[5227]: USER_ACCT pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.049698 kernel: audit: type=1101 audit(1768353208.039:873): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.054581 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:13:28.051000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.065881 kernel: audit: type=1103 audit(1768353208.051:874): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.074938 kernel: audit: type=1006 audit(1768353208.051:875): pid=5227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 14 01:13:28.051000 audit[5227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f05f4d0 a2=3 a3=0 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:28.075859 systemd-logind[1569]: New session 25 of user core.
Jan 14 01:13:28.084732 kernel: audit: type=1300 audit(1768353208.051:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9f05f4d0 a2=3 a3=0 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:28.051000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:28.094697 kernel: audit: type=1327 audit(1768353208.051:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:28.092083 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 01:13:28.098000 audit[5227]: USER_START pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.110705 kernel: audit: type=1105 audit(1768353208.098:876): pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.111000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.119715 kernel: audit: type=1103 audit(1768353208.111:877): pid=5231 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.524308 kubelet[2860]: E0114 01:13:28.524256 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 14 01:13:28.651721 sshd[5231]: Connection closed by 4.153.228.146 port 44346
Jan 14 01:13:28.653035 sshd-session[5227]: pam_unix(sshd:session): session closed for user core
Jan 14 01:13:28.658000 audit[5227]: USER_END pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.664613 systemd[1]: sshd@23-143.198.154.109:22-4.153.228.146:44346.service: Deactivated successfully.
Jan 14 01:13:28.669835 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 01:13:28.670708 kernel: audit: type=1106 audit(1768353208.658:878): pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.658000 audit[5227]: CRED_DISP pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.674912 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit.
Jan 14 01:13:28.678592 systemd-logind[1569]: Removed session 25.
Jan 14 01:13:28.683838 kernel: audit: type=1104 audit(1768353208.658:879): pid=5227 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success'
Jan 14 01:13:28.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-143.198.154.109:22-4.153.228.146:44346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:30.523110 kubelet[2860]: E0114 01:13:30.522950 2860 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 14 01:13:30.525559 kubelet[2860]: E0114 01:13:30.525518 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r79p" podUID="374e8a11-39d9-48c5-bcf7-672988f566dc"
Jan 14 01:13:31.523453 kubelet[2860]: E0114 01:13:31.521049 2860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dc87cd5bb-hhclj" podUID="c98f05e3-1f7b-4c82-ac93-d4261901eb6e"